JPA Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/jpa/) - Java, Spring, Kotlin, microservices, Kubernetes, containers

Using Bootify for a Spring Boot Prototype with Thymeleaf
https://piotrminkowski.com/2024/05/15/using-bootify-for-a-spring-boot-prototype-with-thymeleaf/ (Wed, 15 May 2024)

Bootify is an application generator for Spring Boot prototypes. How can we use this tool to build a Spring Boot CRUD app with Postgres as a database and Thymeleaf as a frontend?

Although Thymeleaf has been around for ages and many web applications are nowadays built as SPAs, the library still serves its purpose extremely well. To make the user experience a little smoother, we also want to use HTMX with its boost function. We will create a simple database schema to manage books, authors, and readers. Huge thanks to Piotr for the very kind opportunity to briefly introduce Bootify here.

You can find several other articles on this blog about Spring Boot. For example, to learn how to build microservices with Spring Boot 3 and Spring Cloud, read the following article.

Configuration of our application

For our new project, we go to Bootify.io and click on the “Start project” link. Our new project can now be accessed via its unique URL. There we can first define the general properties for our small demo.

[Image: spring-boot-bootify-general (general project settings in Bootify)]

We have selected Gradle as the build tool and Java together with Lombok as the language. Thymeleaf with Bootstrap is active as the frontend stack – with this option, Bootstrap and HTMX are integrated as WebJars. Alternatively, a Webpack setup, Tailwind CSS, or Angular are also available for the frontend. Bootify adds the necessary dependencies for our selected database and also takes into account special requirements of the JPA classes, e.g. for JSON or UUID types.

Further down in the developer preferences, we also activate the option for Docker compose – this will include the spring-boot-docker-compose library and prepare the docker-compose.yml so that a Docker container with Postgres is automatically used when our application is started locally. This significantly shortens the preparation time for every new developer and helps with maintenance in the long term.
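The generated docker-compose.yml might look roughly like the sketch below; the image tag, database name, and credentials are illustrative assumptions, not the exact file Bootify produces:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: book-manager
      POSTGRES_USER: bootify
      POSTGRES_PASSWORD: bootify
    ports:
      - "5432:5432"
```

With the spring-boot-docker-compose dependency on the classpath, Spring Boot picks up this file and starts the container automatically on application startup.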

We can now create our database schema in the Entity tab. For each of the three entities, we activate the CRUD option for the frontend so that the corresponding functionality is directly available.

[Image: spring-boot-bootify-entities (entity definitions in Bootify)]

There is an M:N relation between a book and a reader, so we create a many-to-many relation here. For a real application, we might create a dedicated entity, which makes customizations and extensions easier later on.

Code Review and Start

When we are happy with the code in the preview, we download the application and create a new project for it in IntelliJ. It will look something like this.

[Image: spring-boot-bootify-intellij (the generated project in IntelliJ)]

By right-clicking on the BookManagerApplication class, we can create our own run configuration. There we enable the VM options entry below Modify options so that we can pass an additional parameter. With -Dspring.profiles.active=local we start our application with the local profile. The Lombok plugin is automatically recommended by IntelliJ and must be activated. Another interesting option is hidden in the settings under Build, Execution, Deployment > Build Tools > Gradle: there we can switch from Gradle to IntelliJ for the execution of our application, which gives us better insight into the logs.

To start the application, Docker should be available on the system so that the container for Postgres can be started. The spring.docker.compose.lifecycle-management=start-only property in our application.yml keeps the container running even after the application is stopped – it can be shut down manually if necessary.
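Expressed in application.yml, the property mentioned above looks like this:

```yaml
spring:
  docker:
    compose:
      # keep the Postgres container running after the app stops
      lifecycle-management: start-only
```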

The local profile activates the LocalDevConfig class and ensures that changes to the Thymeleaf templates are immediately available in the browser. The forms are generated via the inputRow fragment to avoid repetitive code. The hx-boost flag on the body element makes HTMX perform every action in the browser via Ajax.
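As a minimal illustration (not the exact generated template), enabling the boost behavior is a single attribute on the body element:

```html
<!-- hx-boost turns regular link clicks and form submissions
     into Ajax requests handled by HTMX -->
<body hx-boost="true">
  <!-- page content and Thymeleaf fragments -->
</body>
```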

Conclusion

With the available options in Bootify, each Spring Boot prototype can be configured so that it really only contains the code we want to have. At the same time, you can start a new project with a modern tech stack and many helpful features. I hope this article has given you a good introduction and that Bootify can help you with your future projects!

Useful & Unknown Java Libraries
https://piotrminkowski.com/2023/01/30/useful-unknown-java-libraries/ (Mon, 30 Jan 2023)

This article will teach you about some lesser-known but useful Java libraries. This is the second article in the “useful & unknown” series. The previous one described several attractive but not well-known Java features. You can read more about it here.

Today we will focus on Java libraries. Usually, we use several external libraries in our projects, even if we do not include them directly. For example, Spring Boot comes with a defined set of dependencies included by starters. If we include e.g. spring-boot-starter-test, we also pull in libraries like Mockito, JUnit Jupiter, or Hamcrest. Of course, these are well-known libraries in the community.

In fact, there are a lot of different Java libraries. Usually, I don’t need many of them (or even any of them) when working with frameworks like Spring Boot or Quarkus. However, there are some very interesting libraries that may be useful everywhere. I’m writing about them because you might not have heard of them. I’m going to introduce 5 of my favorite “useful & unknown” Java libraries. Let’s begin!

Source Code

If you would like to try it by yourself, you may always take a look at my source code. To do that you need to clone my GitHub repository, where you can also find the example code. Then you should just follow my instructions.

Instancio

First up is Instancio. How do you generate test data in your unit tests? Instancio will help us with that. It aims to reduce the time and lines of code spent on manual data setup in unit tests. It instantiates and populates objects with random data, making our tests more dynamic. We can generate random data with Instancio, but at the same time we can set custom values for particular fields.
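Unlike the other libraries in this article, the dependency snippet for Instancio is not shown here, so as an assumption based on its Maven coordinates, including it could look like this (check Maven Central for the current version):

```xml
<dependency>
  <groupId>org.instancio</groupId>
  <artifactId>instancio-junit</artifactId>
  <version>2.6.0</version>
  <scope>test</scope>
</dependency>
```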

Before we start with Instancio, let’s discuss our data model. Here’s the first class – Person:

public class Person {

   private Long id;
   private String name;
   private int age;
   private Gender gender;
   private Address address;

   // getters and setters ...

}

Our class contains three simple fields (id, name, age), an enum field of type Gender, and an instance of the Address class. Gender is just a simple enum containing the MALE and FEMALE values. Here’s the implementation of the Address class:

public class Address {

   private String country;
   private String city;
   private String street;
   private int houseNumber;
   private int flatNumber;

   // getters and setters ...

}

Now, let’s create a test to check whether the Person service will successfully add and obtain objects from the store. We want to generate random data for all the fields except the id field, which is set by the service. Here’s our test:

@Test
void addAndGet() {
   Person person = Instancio.of(Person.class)
             .ignore(Select.field(Person::getId))
             .create();
   person = personService.addPerson(person);
   Assertions.assertNotNull(person.getId());
   person = personService.findById(person.getId());
   Assertions.assertNotNull(person);
   Assertions.assertNotNull(person.getAddress());
}

The values generated for my test run are visible below. As you see, the id field equals null. Other fields contain random values generated per the field type (String or int).

Person(id=null, name=ATDLCA, age=2619, gender=MALE, 
address=Address(country=FWOFRNT, city=AIRICCHGGG, street=ZZCIJDZ, houseNumber=5530, flatNumber=1671))

Let’s see how we can generate several objects with Instancio. Assuming we need 5 objects in the list for our test, we can do that in the following way. We will also set a constant value for the city fields inside the Address object. Then we would like to test the method for searching objects by the city name.

@Test
void addListAndGet() {
   final int numberOfObjects = 5;
   final String city = "Warsaw";
   List<Person> persons = Instancio.ofList(Person.class)
           .size(numberOfObjects)
           .set(Select.field(Address::getCity), city)
           .create();
   personService.addPersons(persons);
   persons = personService.findByCity(city);
   Assertions.assertEquals(numberOfObjects, persons.size());
}

Let’s take a look at the last example. As before, we are generating a list of objects, this time 100 of them. We can easily specify additional criteria for the generated values. For example, I would like to set a value for the age field between 18 and 65.

@Test
void addGeneratorAndGet() {
   List<Person> persons = Instancio.ofList(Person.class)
            .size(100)
            .ignore(Select.field(Person::getId))
            .generate(Select.field(Person::getAge), 
                      gen -> gen.ints().range(18, 65))
            .create();
   personService.addPersons(persons);
   persons = personService.findAllGreaterThanAge(40);
   Assertions.assertTrue(persons.size() > 0);
}

That’s just a small subset of the customizations Instancio offers for test data generation. You can read more about the other options in their docs.

Datafaker

The next library we will discuss today is Datafaker. The purpose of this library is quite similar to the previous one. We need to generate random data. However, this time we need data that looks like real data. From my perspective, it is useful for demo presentations or publicly runnable examples.

Datafaker creates fake data for your JVM programs within minutes, using its wide range of more than 100 data providers. This can be very helpful when generating test data to fill a database, generating data for a stress test, or anonymizing data from production services. Let’s include it in our dependencies.

<dependency>
  <groupId>net.datafaker</groupId>
  <artifactId>datafaker</artifactId>
  <version>1.7.0</version>
</dependency>

We will expand our sample model a little. Here’s a new class definition. The Contact class contains two fields: email and phoneNumber. We will validate both of these fields using the jakarta.validation module.

public class Contact {

   @Email
   private String email;
   @Pattern(regexp="\\d{2}-\\d{3}-\\d{2}-\\d{2}")
   private String phoneNumber;

   // getters and setters ...

}
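The regexp used in the @Pattern annotation above can be verified quickly with plain java.util.regex; this standalone sketch checks a matching and a non-matching number:

```java
import java.util.regex.Pattern;

public class PhonePatternCheck {
    // Same regexp as in the @Pattern annotation on phoneNumber
    static final Pattern PHONE = Pattern.compile("\\d{2}-\\d{3}-\\d{2}-\\d{2}");

    public static void main(String[] args) {
        // Hyphen-separated groups of 2-3-2-2 digits match
        System.out.println(PHONE.matcher("69-733-43-77").matches()); // true
        // Digits without hyphens do not
        System.out.println(PHONE.matcher("697334377").matches());    // false
    }
}
```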

Here’s a new version of our Person class that contains the Contact object instance:

public class Person {

   private Long id;
   private String name;
   private int age;
   private Gender gender;
   private Address address;
   @Valid
   private Contact contact;

   // getters and setters ...

}

Now, let’s generate fake data for the Person object. We can create localized data just by setting the Locale object in the Faker constructor (1). For me, it is Poland 🙂 There are a lot of providers for the standard values. To set email we need to use the Internet provider (2). There is a dedicated provider for generating phone numbers (3), addresses (4), and person names (5). You can see a full list of available providers here. After creating test data, we can run the test that adds a new Person verified on the server side.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class PersonsControllerTests {

   @Autowired
   private TestRestTemplate restTemplate;

   @Test
   void add() {
      Faker faker = new Faker(Locale.of("pl")); // (1)
      Contact contact = new Contact();
      contact.setEmail(faker.internet().emailAddress()); // (2)
      contact.setPhoneNumber(faker.phoneNumber().cellPhone()); // (3)
      Address address = new Address();
      address.setCity(faker.address().city()); // (4)
      address.setCountry(faker.address().country());
      address.setStreet(faker.address().streetName());
      int number = Integer
         .parseInt(faker.address().streetAddressNumber());
      address.setHouseNumber(number);
      number = Integer.parseInt(faker.address().buildingNumber());
      address.setFlatNumber(number);
      Person person = new Person();
      person.setName(faker.name().fullName()); // (5)
      person.setContact(contact);
      person.setAddress(address);
      person.setGender(Gender.valueOf(
         faker.gender().binaryTypes().toUpperCase()));
      person.setAge(faker.number().numberBetween(18, 65));

      person = restTemplate
         .postForObject("/persons", person, Person.class);
      Assertions.assertNotNull(person);
      Assertions.assertNotNull(person.getId());
   }

}

Here’s the data generated during my test. I think you can find one inconsistency here (the country field) 😉

Person(id=null, name=Stefania Borkowski, age=51, gender=FEMALE, address=Address(country=Ekwador, city=Sępopol, street=al. Chudzik, houseNumber=882, flatNumber=318), contact=Contact{email='gilbert.augustyniak@gmail.com', phoneNumber='69-733-43-77'})

Sometimes you need to generate a more predictable random result. It’s possible to provide a seed value in the Faker constructor. When providing a seed, fake objects will always be instantiated in a predictable way, which can be handy for reproducing the same results across multiple runs. Here’s a new version of my Faker object declaration:

Faker faker = new Faker(Locale.of("pl"), new Random(0));
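This reproducibility comes from java.util.Random itself: two generators created with the same seed yield identical sequences, which is what Faker builds upon. A small standalone sketch:

```java
import java.util.Random;

public class SeedDemo {
    // Two generators seeded identically produce the same value sequence
    static boolean sameSequence(long seed) {
        Random r1 = new Random(seed);
        Random r2 = new Random(seed);
        for (int i = 0; i < 5; i++) {
            if (r1.nextInt(1000) != r2.nextInt(1000)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sameSequence(0)); // prints "true"
    }
}
```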

JPA Streamer

Our next library is related to JPA queries. If you like to use Java streams and you are building apps that interact with databases through JPA or Hibernate, the JPA Streamer library may be an interesting choice. It is a library for expressing JPA/Hibernate/Spring queries using standard Java streams. JPA Streamer instantly gives Java developers type-safe, expressive and intuitive means of obtaining data in database applications. Moreover, you can easily integrate it with Spring Boot and Quarkus. Firstly, let’s include JPA Streamer in our dependencies:

<dependency>
  <groupId>com.speedment.jpastreamer</groupId>
  <artifactId>jpastreamer-core</artifactId>
  <version>1.1.2</version>
</dependency>

If you want to integrate it with Spring Boot you need to add one additional dependency:

<dependency>
  <groupId>com.speedment.jpastreamer.integration.spring</groupId>
  <artifactId>spring-boot-jpastreamer-autoconfigure</artifactId>
  <version>1.1.2</version>
</dependency>

In order to test JPA Streamer, we need to create an example entities model.

@Entity
public class Employee {

   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   private String position;
   private int salary;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;

   // getters and setters ...

}

There are also two other entities: Organization and Department. Here are their definitions:

@Entity
public class Department {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;

   // getters and setters ...
}

@Entity
public class Organization {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;

   // getters and setters ...
}

Now, we can prepare some queries using the Java streams pattern. In the following fragment of code, we search for an entity by id and then join two relations. By default it is a LEFT JOIN, but we can customize that when calling the joining() method. Here we join Department and Organization, which are in a @ManyToOne relationship with the Employee entity. Then we filter the result, convert the object to a DTO, and pick the first result.

@GetMapping("/{id}")
public EmployeeWithDetailsDTO findById(@PathVariable("id") Integer id) {
   return streamer.stream(of(Employee.class)
           .joining(Employee$.department)
           .joining(Employee$.organization))
        .filter(Employee$.id.equal(id))
        .map(EmployeeWithDetailsDTO::new)
        .findFirst()
        .orElseThrow();
}

Of course, we can call many other Java stream methods. In the following fragment of code, we count the number of employees assigned to the particular department.

@GetMapping("/{id}/count-employees")
public long getNumberOfEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .mapToLong(Set::size)
         .sum();
}

If you are looking for a detailed explanation and more examples with JPA Streamer, you can read my article dedicated to that topic.

Blaze Persistence

Blaze Persistence is another library from the JPA and Hibernate area. It allows you to write complex queries through a consistent, rich builder API on top of the JPA Criteria API. That’s not all. You can also use the Entity-View module dedicated to DTO mapping. Of course, you can easily integrate it with Spring Boot or Quarkus. If you want to use all Blaze Persistence modules in your app, it is worth adding the dependencyManagement section in your Maven pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.blazebit</groupId>
            <artifactId>blaze-persistence-bom</artifactId>
            <version>1.6.8</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>    
    </dependencies>
</dependencyManagement>

Personally, I’m using Blaze Persistence for DTO mapping. Thanks to the integration with Spring Boot we can replace Spring Data Projections with Blaze Persistence Entity-Views. It will be especially useful for more advanced mappings since Blaze Persistence offers more features and better performance for that. You can read a detailed comparison in the following article. If we want to integrate Blaze Persistence Entity-Views with Spring Data we should add the following dependencies:

<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-spring-data-2.7</artifactId>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-hibernate-5.6</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-entity-view-processor</artifactId>
</dependency>

Then, we need to create an interface with getters for the mapped fields. It should be annotated with @EntityView, which refers to the target entity class. In the following example, we map the two entity fields firstName and lastName to a single name field inside the PersonView interface. In order to map the entity’s primary key we should use the @IdMapping annotation.

@EntityView(Person.class)
public interface PersonView {

   @IdMapping
   Integer getId();
   void setId(Integer id);

   @Mapping("CONCAT(firstName,' ',lastName)")
   String getName();
   void setName(String name);

}

We can still take advantage of the Spring Data repository pattern. Our repository interface needs to extend the EntityViewRepository interface.

@Transactional(readOnly = true)
public interface PersonViewRepository 
    extends EntityViewRepository<PersonView, Integer> {

    PersonView findByAgeGreaterThan(int age);

}

We also need to provide some additional configuration and enable Blaze Persistence in the main or the configuration class:

@SpringBootApplication
@EnableBlazeRepositories
@EnableEntityViews
public class PersonApplication {

   public static void main(String[] args) {
      SpringApplication.run(PersonApplication.class, args);
   }

}

Hoverfly

Finally, the last of the Java libraries on my list – Hoverfly. To be more precise, we will use the Java version of the Hoverfly library documented here. It is a lightweight service virtualization tool that allows you to stub or simulate HTTP(S) services. Hoverfly Java is a native language binding that gives you an expressive API for managing Hoverfly in Java. It gives you a Hoverfly class which abstracts away the binary and API calls, a DSL for creating simulations, and a JUnit integration for using it within unit tests.

Ok, there are some other, similar libraries… but somehow I really like Hoverfly 🙂 It is a simple, lightweight library that can run tests in different modes like simulation, spying, capture, or diffing. You can use the Java DSL to build request-matcher-to-response mappings. Let’s include the latest version of Hoverfly in the Maven dependencies:

<dependency>
  <groupId>io.specto</groupId>
  <artifactId>hoverfly-java-junit5</artifactId>
  <version>0.14.3</version>
</dependency>

Let’s assume we have the following method in our Spring @RestController. Before returning its own ping response, it calls another service at http://callme-service:8080/callme/ping.

@GetMapping("/ping")
public String ping() {
   String response = restTemplate
     .getForObject("http://callme-service:8080/callme/ping", 
                   String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm caller-service " + version + ". Calling... " + response;
}

Now, we will create the test for our controller. In order to use Hoverfly to intercept outgoing traffic, we register HoverflyExtension (1). Then we may use the Hoverfly object to create a request matcher and simulate an HTTP response (2). The simulated response body is I'm callme-service v1.

@SpringBootTest(properties = {"VERSION = v2"}, 
    webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class) // (1)
public class CallerCallmeTest {
    
   @Autowired
   TestRestTemplate restTemplate;

   @Test
   void callmeIntegration(Hoverfly hoverfly) {
      hoverfly.simulate(
            dsl(service("http://callme-service:8080")
               .get("/callme/ping")
               .willReturn(success().body("I'm callme-service v1.")))
      ); // (2)
      String response = restTemplate
         .getForObject("/caller/ping", String.class);
      assertEquals("I'm caller-service v2. Calling... I'm callme-service v1.", response);
   }
}

We can easily customize Hoverfly behavior with the @HoverflyConfig annotation. By default, Hoverfly works in proxy mode. Assuming we want it to act as a web server, we need to set the webServer property to true (1). After that, it will listen for requests on localhost and the port indicated by the proxyPort property. In the next step, we will also use Spring Cloud’s @LoadBalancerClient to configure a static list of target URLs instead of dynamic discovery (2). Finally, we can create a Hoverfly test. This time we are intercepting traffic from the web server listening on localhost:8080 (3).

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
@HoverflyCore(config = 
   @HoverflyConfig(logLevel = LogLevel.DEBUG, 
                    webServer = true, 
                    proxyPort = 8080)) // (1)
@ExtendWith(HoverflyExtension.class)
@LoadBalancerClient(name = "account-service", 
                    configuration = AccountServiceConf.class) // (2)
public class GatewayTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    public void findAccounts(Hoverfly hoverfly) {
        hoverfly.simulate(dsl(
            service("http://localhost:8080")
                .andDelay(200, TimeUnit.MILLISECONDS).forAll()
                .get(any())
                .willReturn(success("[{\"id\":\"1\",\"number\":\"1234567890\",\"balance\":5000}]", "application/json")))); // (3)

        ResponseEntity<String> response = restTemplate
                .getForEntity("/account/1", String.class);
        Assertions.assertEquals(200, response.getStatusCodeValue());
        Assertions.assertNotNull(response.getBody());
    }
}

Here’s the load balancer client configuration created just for testing purposes.

class AccountServiceInstanceListSuppler implements 
    ServiceInstanceListSupplier {

    private final String serviceId;

    AccountServiceInstanceListSuppler(String serviceId) {
        this.serviceId = serviceId;
    }

    @Override
    public String getServiceId() {
        return serviceId;
    }

    @Override
    public Flux<List<ServiceInstance>> get() {
        return Flux.just(Arrays
                .asList(new DefaultServiceInstance(serviceId + "1", 
                        serviceId, 
                        "localhost", 8080, false)));
    }
}

Final Thoughts

As you have probably figured out, I used all of these Java libraries with Spring Boot apps. Although Spring Boot comes with a defined set of external libraries, sometimes we may need some add-ons. The Java libraries I presented are usually created to solve a single, particular problem, e.g. test data generation. That’s totally fine from my perspective. I hope you will find at least one item from my list useful in your projects.

An Advanced GraphQL with Spring Boot
https://piotrminkowski.com/2023/01/18/an-advanced-graphql-with-spring-boot/ (Wed, 18 Jan 2023)

In this article, you will learn how to use Spring for GraphQL in your Spring Boot app. Spring for GraphQL is a relatively new project. The 1.0 version was released a few months ago. Before that release, we had to include third-party libraries to simplify GraphQL implementation in the Spring Boot app. I have already described two alternative solutions in my previous articles. In the following article, you will learn about the GraphQL Java Kickstart project. In the other article, you can see how to create some more advanced GraphQL queries with the Netflix DGS library.

We will use a very similar schema and entity model as in those two articles about Spring Boot and GraphQL.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

Firstly, you should go to the sample-app-spring-graphql directory. Our sample Spring Boot app exposes an API over GraphQL and connects to the in-memory H2 database. It uses Spring Data JPA as a layer to interact with the database. There are three entities: Employee, Department, and Organization. Each of them is stored in a separate table. Here’s the relationship model.

Getting started with Spring for GraphQL

In addition to the standard Spring Boot modules we need to include the following two dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-graphql</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.graphql</groupId>
  <artifactId>spring-graphql-test</artifactId>
  <scope>test</scope>
</dependency>

The spring-graphql-test module provides additional capabilities for building unit tests. The starter comes with the required libraries and auto-configuration. However, it does not enable the GraphiQL interface. In order to enable it, we should set the following property in the application.yml file:

spring:
  graphql:
    graphiql:
      enabled: true

By default, Spring for GraphQL tries to load schema files from the src/main/resources/graphql directory. It looks there for files with the .graphqls or .gqls extensions. Let’s define the GraphQL schema for the Department entity. The Department type references the two other types: Organization and Employee (the list of employees). There are two queries, for searching all departments and a department by id, and a single mutation for adding a new department.

type Query {
   departments: [Department]
   department(id: ID!): Department!
}

type Mutation {
   newDepartment(department: DepartmentInput!): Department
}

input DepartmentInput {
   name: String!
   organizationId: Int
}

type Department {
   id: ID!
   name: String!
   organization: Organization
   employees: [Employee]
}
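With this schema in place, a client (for example through the GraphiQL UI) could send a query like the one below; the field selection follows the types defined in this article:

```graphql
query {
  department(id: 1) {
    id
    name
    organization {
      name
    }
    employees {
      firstName
      lastName
    }
  }
}
```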

The Organization type schema is pretty similar. As for the more advanced stuff, we need to handle joins to the Employee and Department types.

extend type Query {
   organizations: [Organization]
   organization(id: ID!): Organization!
}

extend type Mutation {
   newOrganization(organization: OrganizationInput!): Organization
}

input OrganizationInput {
   name: String!
}

type Organization {
   id: ID!
   name: String!
   employees: [Employee]
   departments: [Department]
}

And the last schema – for the Employee type. Unlike the previous schemas, it defines the type responsible for handling filtering. The EmployeeFilter is able to filter by salary, position, or age. There is also the query method for handling filtering – employeesWithFilter.

extend type Query {
  employees: [Employee]
  employeesWithFilter(filter: EmployeeFilter): [Employee]
  employee(id: ID!): Employee!
}

extend type Mutation {
  newEmployee(employee: EmployeeInput!): Employee
}

input EmployeeInput {
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  organizationId: Int!
  departmentId: Int!
}

type Employee {
  id: ID!
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  department: Department
  organization: Organization
}

input EmployeeFilter {
  salary: FilterField
  age: FilterField
  position: FilterField
}

input FilterField {
  operator: String!
  value: String!
}
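
With the schema in place, a client can combine several filter fields in a single request; each FilterField carries an operator and a value, and the conditions are combined with AND. Only the gt operator appears in this article’s test queries, so this illustrative example sticks to it:

```graphql
{
  employeesWithFilter(filter: {
    salary: { operator: "gt", value: "12000" }
    age: { operator: "gt", value: "30" }
  }) {
    id
    firstName
    lastName
    salary
    age
  }
}
```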

Create Entities

Do not hold that against me, but I’m using Lombok in the entity implementation. Here’s the Employee entity corresponding to the Employee type defined in the GraphQL schema.

@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  @EqualsAndHashCode.Include
  private Integer id;
  private String firstName;
  private String lastName;
  private String position;
  private int salary;
  private int age;
  @ManyToOne(fetch = FetchType.LAZY)
  private Department department;
  @ManyToOne(fetch = FetchType.LAZY)
  private Organization organization;
}

Here we have the Department entity.

@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  @EqualsAndHashCode.Include
  private Integer id;
  private String name;
  @OneToMany(mappedBy = "department")
  private Set<Employee> employees;
  @ManyToOne(fetch = FetchType.LAZY)
  private Organization organization;
}

Finally, we can take a look at the Organization entity.

@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Organization {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  @EqualsAndHashCode.Include
  private Integer id;
  private String name;
  @OneToMany(mappedBy = "organization")
  private Set<Department> departments;
  @OneToMany(mappedBy = "organization")
  private Set<Employee> employees;
}

Using Spring for GraphQL with Spring Boot

Spring for GraphQL provides an annotation-based programming model using the well-known @Controller pattern. It is also possible to use the Querydsl library together with Spring Data JPA: you can then annotate your Spring Data repositories with @GraphQlRepository. In this article, I will use the standard JPA Criteria API for generating more advanced queries with filters and joins.

Let’s start with our first controller. In comparison to both previous articles, about Netflix DGS and GraphQL Java Kickstart, we will keep queries and mutations in the same class. We need to annotate query methods with @QueryMapping, and mutation methods with @MutationMapping. The last query method, employeesWithFilter, performs advanced filtering based on the dynamic list of fields passed in the EmployeeFilter input type. To pass an input parameter, we should annotate the method argument with @Argument.

@Controller
public class EmployeeController {

   DepartmentRepository departmentRepository;
   EmployeeRepository employeeRepository;
   OrganizationRepository organizationRepository;

   EmployeeController(DepartmentRepository departmentRepository,
                      EmployeeRepository employeeRepository, 
                      OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.employeeRepository = employeeRepository;
      this.organizationRepository = organizationRepository;
   }

   @QueryMapping
   public Iterable<Employee> employees() {
       return employeeRepository.findAll();
   }

   @QueryMapping
   public Employee employee(@Argument Integer id) {
       return employeeRepository.findById(id).orElseThrow();
   }

   @MutationMapping
   public Employee newEmployee(@Argument EmployeeInput employee) {
      Department department = departmentRepository
         .findById(employee.getDepartmentId()).orElseThrow();
      Organization organization = organizationRepository
         .findById(employee.getOrganizationId()).orElseThrow();
      // the @AllArgsConstructor follows the field declaration order in
      // Employee, so salary must be passed before age
      return employeeRepository.save(new Employee(null, employee.getFirstName(), employee.getLastName(),
                employee.getPosition(), employee.getSalary(), employee.getAge(),
                department, organization));
   }

   @QueryMapping
   public Iterable<Employee> employeesWithFilter(
         @Argument EmployeeFilter filter) {
      Specification<Employee> spec = null;
      if (filter.getSalary() != null)
         spec = bySalary(filter.getSalary());
      if (filter.getAge() != null)
         spec = (spec == null ? byAge(filter.getAge()) : spec.and(byAge(filter.getAge())));
      if (filter.getPosition() != null)
         spec = (spec == null ? byPosition(filter.getPosition()) :
                    spec.and(byPosition(filter.getPosition())));
      if (spec != null)
         return employeeRepository.findAll(spec);
      else
         return employeeRepository.findAll();
   }

   private Specification<Employee> bySalary(FilterField filterField) {
      return (root, query, builder) -> filterField
         .generateCriteria(builder, root.get("salary"));
   }

   private Specification<Employee> byAge(FilterField filterField) {
      return (root, query, builder) -> filterField
         .generateCriteria(builder, root.get("age"));
   }

   private Specification<Employee> byPosition(FilterField filterField) {
      return (root, query, builder) -> filterField
         .generateCriteria(builder, root.get("position"));
   }
}
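
The FilterField.generateCriteria method used by the bySalary, byAge, and byPosition specifications above translates an operator string and a value into a JPA Criteria Predicate (its implementation is not shown in this article). As a dependency-free illustration of that operator dispatch, here is a plain-Java analogue that produces an in-memory predicate instead; only the gt operator appears elsewhere in the article, so lt and eq are assumed examples.

```java
import java.util.function.IntPredicate;

// Hypothetical sketch: the real generateCriteria builds a JPA Criteria
// Predicate, while this analogue performs the same operator dispatch but
// returns a plain IntPredicate evaluated in memory.
public class FilterFieldSketch {

    public static IntPredicate toPredicate(String operator, int value) {
        switch (operator) {
            case "gt": return v -> v > value;   // greater than
            case "lt": return v -> v < value;   // less than
            case "eq": return v -> v == value;  // equal
            default:
                throw new IllegalArgumentException("Unknown operator: " + operator);
        }
    }

    public static void main(String[] args) {
        IntPredicate salaryGt = toPredicate("gt", 12000);
        System.out.println(salaryGt.test(15000)); // true
        System.out.println(salaryGt.test(10000)); // false
    }
}
```

The same dispatch, applied to CriteriaBuilder calls instead of plain comparisons, yields the predicates combined with Specification.and in the employeesWithFilter method.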

Here’s our JPA repository implementation. In order to use the JPA Criteria API, it needs to extend the JpaSpecificationExecutor interface. The same rule applies to the other repositories, DepartmentRepository and OrganizationRepository.

public interface EmployeeRepository extends 
   CrudRepository<Employee, Integer>, JpaSpecificationExecutor<Employee> {
}

Now, let’s switch to another controller. Here’s the implementation of DepartmentController. It shows an example of relationship fetching. We use DataFetchingEnvironment to detect whether the input query contains a relationship field. In our case, it may be employees or organization. If any of those fields is defined, we add the particular relation to the JOIN statement. The same approach applies to both the department and departments methods.

@Controller
public class DepartmentController {

   DepartmentRepository departmentRepository;
   OrganizationRepository organizationRepository;

   DepartmentController(DepartmentRepository departmentRepository, OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.organizationRepository = organizationRepository;
   }

   @MutationMapping
   public Department newDepartment(@Argument DepartmentInput department) {
      Organization organization = organizationRepository
         .findById(department.getOrganizationId()).get();
      return departmentRepository.save(new Department(null, department.getName(), null, organization));
   }

   @QueryMapping
   public Iterable<Department> departments(DataFetchingEnvironment environment) {
      DataFetchingFieldSelectionSet s = environment.getSelectionSet();
      if (s.contains("employees") && !s.contains("organization"))
         return departmentRepository.findAll(fetchEmployees());
      else if (!s.contains("employees") && s.contains("organization"))
         return departmentRepository.findAll(fetchOrganization());
      else if (s.contains("employees") && s.contains("organization"))
         return departmentRepository.findAll(fetchEmployees().and(fetchOrganization()));
      else
         return departmentRepository.findAll();
   }

   @QueryMapping
   public Department department(@Argument Integer id, DataFetchingEnvironment environment) {
      Specification<Department> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment
         .getSelectionSet();
      if (selectionSet.contains("employees"))
         spec = spec.and(fetchEmployees());
      if (selectionSet.contains("organization"))
         spec = spec.and(fetchOrganization());
      return departmentRepository.findOne(spec).orElseThrow(NoSuchElementException::new);
   }

    private Specification<Department> fetchOrganization() {
        return (root, query, builder) -> {
            Fetch<Department, Organization> f = root
               .fetch("organization", JoinType.LEFT);
            Join<Department, Organization> join = (Join<Department, Organization>) f;
            return join.getOn();
        };
    }

   private Specification<Department> fetchEmployees() {
      return (root, query, builder) -> {
         Fetch<Department, Employee> f = root
            .fetch("employees", JoinType.LEFT);
         Join<Department, Employee> join = (Join<Department, Employee>) f;
         return join.getOn();
      };
   }

   private Specification<Department> byId(Integer id) {
      return (root, query, builder) -> builder.equal(root.get("id"), id);
   }
}

Here’s the OrganizationController implementation.

@Controller
public class OrganizationController {

   OrganizationRepository repository;

   OrganizationController(OrganizationRepository repository) {
      this.repository = repository;
   }

   @MutationMapping
   public Organization newOrganization(@Argument OrganizationInput organization) {
      return repository.save(new Organization(null, organization.getName(), null, null));
   }

   @QueryMapping
   public Iterable<Organization> organizations() {
      return repository.findAll();
   }

   @QueryMapping
   public Organization organization(@Argument Integer id, DataFetchingEnvironment environment) {
      Specification<Organization> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment
         .getSelectionSet();
      if (selectionSet.contains("employees"))
         spec = spec.and(fetchEmployees());
      if (selectionSet.contains("departments"))
         spec = spec.and(fetchDepartments());
      return repository.findOne(spec).orElseThrow();
   }

   private Specification<Organization> fetchDepartments() {
      return (root, query, builder) -> {
         Fetch<Organization, Department> f = root
            .fetch("departments", JoinType.LEFT);
         Join<Organization, Department> join = (Join<Organization, Department>) f;
         return join.getOn();
      };
   }

   private Specification<Organization> fetchEmployees() {
      return (root, query, builder) -> {
         Fetch<Organization, Employee> f = root
            .fetch("employees", JoinType.LEFT);
         Join<Organization, Employee> join = (Join<Organization, Employee>) f;
         return join.getOn();
      };
   }

   private Specification<Organization> byId(Integer id) {
      return (root, query, builder) -> builder.equal(root.get("id"), id);
   }
}

Create Unit Tests

Once we have created the whole logic, it’s time to test it. In the next section, I’ll show you how to use the GraphiQL IDE for that. Here, we are going to focus on unit tests. The simplest way to start with Spring for GraphQL tests is through the GraphQlTester bean. We can use it in a mocked web environment. You can also build tests for the HTTP layer with another bean – HttpGraphQlTester. However, it requires us to provide an instance of WebTestClient.

Here are the tests for the Employee @Controller. Each time we build an inline query using the GraphQL notation. We need to annotate the whole test class with @AutoConfigureGraphQlTester. Then we can use the DSL API provided by the GraphQlTester to get and verify data from the backend. Besides two simple tests, we also verify whether the EmployeeFilter works fine in the findWithFilter method.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureGraphQlTester
public class EmployeeControllerTests {

   @Autowired
   private GraphQlTester tester;

   @Test
   void addEmployee() {
      String query = "mutation { newEmployee(employee: { firstName: \"John\" lastName: \"Wick\" position: \"developer\" salary: 10000 age: 20 departmentId: 1 organizationId: 1}) { id } }";
      Employee employee = tester.document(query)
              .execute()
              .path("data.newEmployee")
              .entity(Employee.class)
              .get();
      Assertions.assertNotNull(employee);
      Assertions.assertNotNull(employee.getId());
   }

   @Test
   void findAll() {
      String query = "{ employees { id firstName lastName salary } }";
      List<Employee> employees = tester.document(query)
             .execute()
             .path("data.employees[*]")
             .entityList(Employee.class)
             .get();
      Assertions.assertTrue(employees.size() > 0);
      Assertions.assertNotNull(employees.get(0).getId());
      Assertions.assertNotNull(employees.get(0).getFirstName());
   }

   @Test
   void findById() {
      String query = "{ employee(id: 1) { id firstName lastName salary } }";
      Employee employee = tester.document(query)
             .execute()
             .path("data.employee")
             .entity(Employee.class)
             .get();
      Assertions.assertNotNull(employee);
      Assertions.assertNotNull(employee.getId());
      Assertions.assertNotNull(employee.getFirstName());
   }

   @Test
   void findWithFilter() {
      String query = "{ employeesWithFilter(filter: { salary: { operator: \"gt\" value: \"12000\" } }) { id firstName lastName salary } }";
      List<Employee> employees = tester.document(query)
             .execute()
             .path("data.employeesWithFilter[*]")
             .entityList(Employee.class)
             .get();
      Assertions.assertTrue(employees.size() > 0);
      Assertions.assertNotNull(employees.get(0).getId());
      Assertions.assertNotNull(employees.get(0).getFirstName());
   }
}

The tests for the Department type are very similar. Additionally, we test the join statements in the findById test method by declaring the organization field in the query.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureGraphQlTester
public class DepartmentControllerTests {

   @Autowired
   private GraphQlTester tester;

   @Test
   void addDepartment() {
      String query = "mutation { newDepartment(department: { name: \"Test10\" organizationId: 1}) { id } }";
      Department department = tester.document(query)
             .execute()
             .path("data.newDepartment")
             .entity(Department.class)
             .get();
      Assertions.assertNotNull(department);
      Assertions.assertNotNull(department.getId());
   }

   @Test
   void findAll() {
      String query = "{ departments { id name } }";
      List<Department> departments = tester.document(query)
             .execute()
             .path("data.departments[*]")
             .entityList(Department.class)
             .get();
      Assertions.assertTrue(departments.size() > 0);
      Assertions.assertNotNull(departments.get(0).getId());
      Assertions.assertNotNull(departments.get(0).getName());
   }

   @Test
   void findById() {
      String query = "{ department(id: 1) { id name organization { id } } }";
      Department department = tester.document(query)
             .execute()
             .path("data.department")
             .entity(Department.class)
             .get();
      Assertions.assertNotNull(department);
      Assertions.assertNotNull(department.getId());
      Assertions.assertNotNull(department.getOrganization());
      Assertions.assertNotNull(department.getOrganization().getId());
   }
    
}

Each time you clone my repository, you can be sure that the examples work fine thanks to automated tests. You can always verify the status of the repository build in my CircleCI pipeline.

Testing with GraphiQL

We can easily start the application with the following Maven command:

$ mvn clean spring-boot:run

Once you do that, you can access the GraphiQL tool at http://localhost:8080/graphiql. The app inserts some demo data into the H2 database on startup. GraphiQL provides content assist for building GraphQL queries. Here’s a sample query tested there.
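The original post shows the tested query as a screenshot; a query of the following shape (assembled from the schema defined earlier) can be pasted into GraphiQL to fetch a department together with its organization and employees:

```graphql
{
  department(id: 1) {
    id
    name
    organization { id name }
    employees { id firstName lastName salary }
  }
}
```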

Final Thoughts

Spring for GraphQL is a very interesting project, and I will be following its development closely. Besides @Controller support, I tried to use the Querydsl integration with Spring Data JPA repositories. However, I ran into some problems with it, and therefore I decided not to cover that topic in this article. Currently, Spring for GraphQL is the third solid Java framework with high-level GraphQL support for Spring Boot. My choice is still Netflix DGS, but Spring for GraphQL is under active development, so we can probably expect some new and useful features soon.

The post An Advanced GraphQL with Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/01/18/an-advanced-graphql-with-spring-boot/feed/ 3 13938
Express JPA Queries as Java Streams https://piotrminkowski.com/2021/07/13/express-jpa-queries-as-java-streams/ https://piotrminkowski.com/2021/07/13/express-jpa-queries-as-java-streams/#comments Tue, 13 Jul 2021 11:00:11 +0000 https://piotrminkowski.com/?p=9949 In this article, you will learn how to use the JPAstreamer library to express your JPA queries with Java streams. I will also show you how to integrate this library with Spring Boot and Spring Data. The idea around it is very simple but at the same time brilliant. The library creates a SQL query […]

The post Express JPA Queries as Java Streams appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use the JPAstreamer library to express your JPA queries with Java streams. I will also show you how to integrate this library with Spring Boot and Spring Data. The idea around it is very simple but at the same time brilliant. The library creates a SQL query based on your Java stream. That’s all. I have already mentioned this library on my Twitter account.


Before we start, let’s take a look at the following picture. It should explain the concept in a simple way. That’s pretty intuitive, right?

(Figure: jpa-java-streams-table)

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions. Let’s begin.

Dependencies and configuration

As an example, we have a simple Spring Boot application that runs an embedded H2 database and exposes data through a REST API. It also uses Spring Data JPA to interact with the database. But with the JPAstreamer library, this is completely transparent for us. So, in the first step, we need to include the following two dependencies. The first of them adds JPAstreamer, while the second integrates it with Spring Boot.

<dependency>
  <groupId>com.speedment.jpastreamer</groupId>
  <artifactId>jpastreamer-core</artifactId>
  <version>1.0.1</version>
</dependency>
<dependency>
  <groupId>com.speedment.jpastreamer.integration.spring</groupId>
  <artifactId>spring-boot-jpastreamer-autoconfigure</artifactId>
  <version>1.0.1</version>
</dependency>

Then, we need to add Spring Boot Web and JPA starters, H2 database, and optionally Lombok.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <version>1.18.20</version>
</dependency>

I’m using Java 15 for compilation. Because I use Java records for DTOs, I need to enable preview features. Here’s the plugin responsible for that.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.8.1</version>
  <configuration>
    <release>15</release>
    <compilerArgs>
        --enable-preview
    </compilerArgs>
    <source>15</source>
    <target>15</target>
  </configuration>
</plugin>

The JPAstreamer library generates source code based on your entity model. We may then use it, for example, to perform filtering or sorting. But we will talk about that in the next part of the article. For now, let’s configure the build process with the build-helper-maven-plugin. It registers the target/generated-sources/annotations directory, where the generated sources land, as an additional source folder. If you use IntelliJ, it is automatically included as a source folder in your project.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.2.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>${project.build.directory}/generated-sources/annotations</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>

The original post shows the source code generated by JPAstreamer for our entity model at this point – the Employee$, Department$, and Organization$ metamodel classes used in the examples below.

Entity model for JPA

Let’s take a look at our example entities. Here’s the Employee class. Each employee is assigned to the department and organization.

@Entity
@NoArgsConstructor
@Getter
@Setter
@ToString
public class Employee {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   private String position;
   private int salary;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s the Department entity.

@Entity
@NoArgsConstructor
@Getter
@Setter
public class Department {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s the Organization entity.

@Entity
@NoArgsConstructor
@Getter
@Setter
public class Organization {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;
}

We will also use Java records to create DTOs. Here’s a simple DTO for the Employee entity.

public record EmployeeDTO(
   Integer id,
   String name,
   String position,
   int salary
) {
   public EmployeeDTO(Employee emp) {
      this(emp.getId(), emp.getName(), emp.getPosition(), emp.getSalary());
   }
}

We also have a DTO record to express relationship fields.

public record EmployeeWithDetailsDTO(
   Integer id,
   String name,
   String position,
   int salary,
   String organizationName,
   String departmentName
) {
   public EmployeeWithDetailsDTO(Employee emp) {
      this(emp.getId(), emp.getName(), emp.getPosition(), emp.getSalary(),
            emp.getOrganization().getName(),
            emp.getDepartment().getName());
   }
}

Express JPA queries as Java streams

Let’s begin with a simple example. We would like to get all the departments, sort them ascending by the name field, and then convert them to DTOs. We just need to get an instance of the JPAStreamer object and invoke its stream() method. Then we do everything else just as we would with standard Java streams.

@GetMapping
public List<DepartmentDTO> findAll() {
   return streamer.stream(Department.class)
        .sorted(Department$.name)
        .map(DepartmentDTO::new)
        .collect(Collectors.toList());
}

Now, we can call the endpoint after starting our Spring Boot application.

$ curl http://localhost:8080/departments
[{"id":4,"name":"aaa"},{"id":3,"name":"bbb"},{"id":2,"name":"ccc"},{"id":1,"name":"ddd"}]

Let’s take a look at something a little bit more advanced. We are going to find employees with salaries greater than an input value, sort them by salaries, and of course map to DTO.

@GetMapping("/greater-than/{salary}")
public List<EmployeeDTO> findBySalaryGreaterThan(@PathVariable("salary") int salary) {
   return streamer.stream(Employee.class)
         .filter(Employee$.salary.greaterThan(salary))
         .sorted(Employee$.salary)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}

Then, we call another endpoint once again.

$ curl http://localhost:8080/employees/greater-than/25000    
[{"id":5,"name":"Test5","position":"Architect","salary":30000},{"id":7,"name":"Test7","position":"Manager","salary":30000},{"id":9,"name":"Test9","position":"Developer","salary":30000}]

We can also perform JPA pagination by using the skip and limit Java stream methods.

@GetMapping("/offset/{offset}/limit/{limit}")
public List<EmployeeDTO> findAllWithPagination(
      @PathVariable("offset") int offset, 
      @PathVariable("limit") int limit) {
   return streamer.stream(Employee.class)
         .skip(offset)
         .limit(limit)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}

What is important, all such operations are performed on the database side. Here’s the SQL query generated for the implementation visible above.
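The generated SQL appeared as a screenshot in the original post. For the pagination endpoint above, the query Hibernate produces for H2 is roughly of the following shape (illustrative only – the exact column aliases and paging clause depend on the Hibernate version and dialect in use):

```sql
-- illustrative sketch of the pagination query pushed down to the database
select e.id, e.name, e.position, e.salary, e.department_id, e.organization_id
from employee e
limit ? offset ?
```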

What about relationships between entities? Of course, relationships between tables are handled by the JPA provider. In order to perform a JOIN operation with JPAstreamer, we just need to specify the joining stream. By default, it is a LEFT JOIN, but we can customize it when calling the joining() method. In the following fragment of code, we join Department and Organization, which are in a @ManyToOne relationship with the Employee entity.

@GetMapping("/{id}")
public EmployeeWithDetailsDTO findById(@PathVariable("id") Integer id) {
   return streamer.stream(of(Employee.class)
           .joining(Employee$.department)
           .joining(Employee$.organization))
        .filter(Employee$.id.equal(id))
        .map(EmployeeWithDetailsDTO::new)
        .findFirst()
        .orElseThrow();
}

Of course, we can call many other Java stream methods. In the following fragment of code, we count the number of employees assigned to the particular department.

@GetMapping("/{id}/count-employees")
public long getNumberOfEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .mapToLong(Set::size)
         .sum();
}

And the last example today. We get all the employees assigned to a particular department and map each of them to EmployeeDTO.

@GetMapping("/{id}/employees")
public List<EmployeeDTO> getEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .flatMap(Set::stream)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}

Integration with Spring Boot

We can easily integrate JPAstreamer with Spring Boot and Spring Data JPA. In fact, you don’t have to do anything more than include the dependency responsible for the Spring integration. It also provides auto-configuration for Spring Data JPA. Therefore, we just need to inject the JPAStreamer bean into the target service or controller.

@RestController
@RequestMapping("/employees")
public class EmployeeController {

   private final JPAStreamer streamer;

   public EmployeeController(JPAStreamer streamer) {
      this.streamer = streamer;
   }

   @GetMapping("/greater-than/{salary}")
   public List<EmployeeDTO> findBySalaryGreaterThan(
      @PathVariable("salary") int salary) {
   return streamer.stream(Employee.class)
        .filter(Employee$.salary.greaterThan(salary))
        .sorted(Employee$.salary)
        .map(EmployeeDTO::new)
        .collect(Collectors.toList());
   }

   // ...

}

Final Thoughts

The concept behind the JPAstreamer library seems very interesting. I really like it. The only disadvantage I found is that it sends certain data back to Speedment’s servers for Google Analytics. If you wish to disable this feature, you need to contact their team. To be honest, that concerns me a little. But it doesn’t change my point of view that JPAstreamer is a very useful library. If you are interested in topics related to Java streams and collections, you may read my article Using Eclipse Collections.

The post Express JPA Queries as Java Streams appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/07/13/express-jpa-queries-as-java-streams/feed/ 21 9949
An Advanced GraphQL with Quarkus https://piotrminkowski.com/2021/04/14/advanced-graphql-with-quarkus/ https://piotrminkowski.com/2021/04/14/advanced-graphql-with-quarkus/#comments Wed, 14 Apr 2021 09:46:25 +0000 https://piotrminkowski.com/?p=9672 In this article, you will learn how to create a GraphQL application using the Quarkus framework. Our application will connect to a database, and we will use the Quarkus Panache module as the ORM provider. On the other hand, Quarkus GraphQL support is built on top of the SmallRye GraphQL library. We will discuss some […]

The post An Advanced GraphQL with Quarkus appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create a GraphQL application using the Quarkus framework. Our application will connect to a database, and we will use the Quarkus Panache module as the ORM provider. On the other hand, Quarkus GraphQL support is built on top of the SmallRye GraphQL library. We will discuss some more advanced GraphQL and JPA topics like dynamic filtering or relation fetching.

As an example, I will use the same application as in my previous article about Spring Boot GraphQL support. We will migrate it to Quarkus. Instead of the Netflix DGS library, we will use the already mentioned SmallRye GraphQL module. The next important challenge is to replace the ORM layer based on Spring Data with Quarkus Panache. If you would like to know more about GraphQL on Spring Boot read my article An Advanced GraphQL with Spring Boot and Netflix DGS.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that go to the sample-app-graphql directory. Then you should just follow my instructions.

We use the same schema and entity model as in my previous article about Spring Boot and GraphQL. Our application exposes a GraphQL API and connects to an H2 in-memory database. There are three entities – Employee, Department, and Organization – each of them stored in a separate table. Let’s take a look at a visualization of the relations between them.

(Figure: quarkus-graphql-entities – relations between the Employee, Department, and Organization entities)

1. Dependencies for Quarkus GraphQL

Let’s start with dependencies. We need to include SmallRye GraphQL, Quarkus Panache, and the io.quarkus:quarkus-jdbc-h2 artifact for running an in-memory database with our application. In order to generate getters and setters, we can include the Lombok library. However, we can also take advantage of Quarkus auto-generation support: after extending the entity class with PanacheEntityBase, Quarkus will generate getters and setters as well. We may even extend PanacheEntity to use the default id.

<dependencies>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-smallrye-graphql</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-jdbc-h2</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm-panache</artifactId>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.18.16</version>
    </dependency>
</dependencies>

2. Domain Model for GraphQL and Hibernate

In short, Quarkus simplifies the creation of GraphQL APIs. We don’t have to manually define any schemas. The only thing we need to do is create a domain model and use some annotations. First things first – our domain model. To clarify, I’m using the same classes for the ORM and the API. Of course, we should create DTO objects to expose data through a GraphQL API, but I want to keep the example implementation as simple as possible. Here’s the Employee entity class.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Also, let’s take a look at the Department entity.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Besides entities, we also have input parameters used in the mutations. However, the input objects are much simpler than outputs. Just to compare, here’s the DepartmentInput class.

@Data
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}

3. GraphQL Filtering with Quarkus

In this section, we will create a dynamic filter in the GraphQL API. Our sample filter allows defining criteria for three different Employee fields: salary, age and position. We may set a single field, two of them, or all three. The conditions are combined with AND. The class with the filter implementation is visible below. It consists of several fields represented by FilterField objects.

@Data
public class EmployeeFilter {
   private FilterField salary;
   private FilterField age;
   private FilterField position;
}

Then, let’s take a look at the FilterField implementation. It has two parameters: operator and value. Basing on the values of these parameters we are generating a JPA Criteria Predicate. I’m just generating the most common conditions for comparison between two numbers or strings.

@Data
public class FilterField {
   private String operator;
   private String value;

   public Predicate generateCriteria(CriteriaBuilder builder, Path field) {
      try {
         // numeric value: apply number comparison operators
         int v = Integer.parseInt(value);
         switch (operator) {
         case "lt": return builder.lt(field, v);
         case "le": return builder.le(field, v);
         case "gt": return builder.gt(field, v);
         case "ge": return builder.ge(field, v);
         case "eq": return builder.equal(field, v);
         }
      } catch (NumberFormatException e) {
         // non-numeric value: apply string matching operators
         switch (operator) {
         case "endsWith": return builder.like(field, "%" + value);
         case "startsWith": return builder.like(field, value + "%");
         case "contains": return builder.like(field, "%" + value + "%");
         case "eq": return builder.equal(field, value);
         }
      }

      // unknown operator: no predicate
      return null;
   }
}
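The operator dispatch above can be exercised without a database. Here is a plain-Java analogue over java.util.function.Predicate; the class and method names are illustrative only, but the semantics mirror the article's code: numeric values get comparison operators, non-numeric values get string operators.

```java
import java.util.function.Predicate;

// Plain-Java analogue of FilterField.generateCriteria, for seeing the
// operator dispatch in isolation (illustrative, not part of the sample app).
public class FilterFieldDemo {

    static Predicate<Object> toPredicate(String operator, String value) {
        try {
            int v = Integer.parseInt(value); // numeric filter value
            switch (operator) {
                case "lt": return o -> ((Number) o).intValue() < v;
                case "le": return o -> ((Number) o).intValue() <= v;
                case "gt": return o -> ((Number) o).intValue() > v;
                case "ge": return o -> ((Number) o).intValue() >= v;
                case "eq": return o -> ((Number) o).intValue() == v;
            }
        } catch (NumberFormatException e) { // string filter value
            switch (operator) {
                case "endsWith":   return o -> o.toString().endsWith(value);
                case "startsWith": return o -> o.toString().startsWith(value);
                case "contains":   return o -> o.toString().contains(value);
                case "eq":         return o -> o.toString().equals(value);
            }
        }
        return null; // unknown operator: no predicate, as in the article's code
    }
}
```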

After defining the model classes, we may proceed to the repository implementation. We will use PanacheRepository for that. The idea behind it is quite similar to Spring Data Repositories. However, there is no counterpart of the Spring Data Specification interface, which can be used to execute JPA criteria queries. Since we need to build a query based on dynamic criteria, such a feature would be helpful. Therefore, we inject EntityManager into the repository class and use it directly to obtain the JPA CriteriaBuilder. Finally, we execute a query with the criteria and return a list of employees matching the input conditions.

@ApplicationScoped
public class EmployeeRepository implements PanacheRepository<Employee> {

   private EntityManager em;

   public EmployeeRepository(EntityManager em) {
      this.em = em;
   }

   public List<Employee> findByCriteria(EmployeeFilter filter) {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Employee> criteriaQuery = builder.createQuery(Employee.class);
      Root<Employee> root = criteriaQuery.from(Employee.class);
      Predicate predicate = null;
      if (filter.getSalary() != null)
         predicate = filter.getSalary().generateCriteria(builder, root.get("salary"));
      if (filter.getAge() != null)
         predicate = (predicate == null ?
            filter.getAge().generateCriteria(builder, root.get("age")) :
            builder.and(predicate, filter.getAge().generateCriteria(builder, root.get("age"))));
      if (filter.getPosition() != null)
         predicate = (predicate == null ? filter.getPosition().generateCriteria(builder, root.get("position")) :
            builder.and(predicate, filter.getPosition().generateCriteria(builder, root.get("position"))));

      if (predicate != null)
         criteriaQuery.where(predicate);

      return em.createQuery(criteriaQuery).getResultList();
   }

}
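The null-safe AND-combination in findByCriteria above follows a general pattern: skip absent conditions and AND the rest. It can be sketched as a small generic helper over java.util.function.Predicate (an illustrative sketch, not part of the sample application):

```java
import java.util.Arrays;
import java.util.Objects;
import java.util.function.Predicate;

// Generic version of the null-safe AND-combination used in findByCriteria.
public class PredicateCombiner {

    @SafeVarargs
    static <T> Predicate<T> andAll(Predicate<T>... conditions) {
        return Arrays.stream(conditions)
                .filter(Objects::nonNull)   // skip filter fields that were not set
                .reduce(Predicate::and)     // AND the remaining conditions
                .orElse(t -> true);         // no conditions at all: match everything
    }
}
```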

In the last step in this section, we create the GraphQL resources. In short, all we need to do is annotate the class with @GraphQLApi and the methods with @Query or @Mutation. If you define an input parameter in a method, you should annotate it with @Name. The EmployeeFetcher class is responsible just for defining queries. It uses built-in methods provided by PanacheRepository and our custom search method created inside the EmployeeRepository class.

@GraphQLApi
public class EmployeeFetcher {

   private EmployeeRepository repository;

   public EmployeeFetcher(EmployeeRepository repository){
      this.repository = repository;
   }

   @Query("employees")
   public List<Employee> findAll() {
      return repository.listAll();
   }

   @Query("employee")
   public Employee findById(@Name("id") Long id) {
      return repository.findById(id);
   }

   @Query("employeesWithFilter")
   public List<Employee> findWithFilter(@Name("filter") EmployeeFilter filter) {
      return repository.findByCriteria(filter);
   }

}

4. Fetching Relations with Quarkus GraphQL

As you probably figured out, all the JPA relations are configured with lazy fetching. To load them, we should explicitly request them in our GraphQL query. For example, we may query all departments and fetch the organization of each department returned in the list. Let's analyze the request visible below. It contains the field organization related to the @ManyToOne relation between the Department and Organization entities.

{
  departments {
    id
    name
    organization {
      id
      name
    }
  }
}

How do we handle it on the server side? Firstly, we need to detect the existence of such a relationship field in our GraphQL query. In order to analyze the input query, we can use the DataFetchingEnvironment and DataFetchingFieldSelectionSet objects. Then we need to prepare different JPA queries depending on the parameters set in the GraphQL query. Once again, we will use JPA Criteria for that. As before, we place the implementation responsible for performing a dynamic join inside the repository bean. Also, to obtain the DataFetchingEnvironment we first need to inject the GraphQL Context bean. With the following DepartmentRepository implementation, we avoid a possible N+1 problem and fetch only the required relations.

@ApplicationScoped
public class DepartmentRepository implements PanacheRepository<Department> {

   private EntityManager em;
   private Context context;

   public DepartmentRepository(EntityManager em, Context context) {
      this.em = em;
      this.context = context;
   }

   public List<Department> findAllByCriteria() {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Department> criteriaQuery = builder.createQuery(Department.class);
      Root<Department> root = criteriaQuery.from(Department.class);
      DataFetchingEnvironment dfe = context.unwrap(DataFetchingEnvironment.class);
      DataFetchingFieldSelectionSet selectionSet = dfe.getSelectionSet();
      if (selectionSet.contains("employees")) {
         root.fetch("employees", JoinType.LEFT);
      }
      if (selectionSet.contains("organization")) {
         root.fetch("organization", JoinType.LEFT);
      }
      criteriaQuery.select(root).distinct(true);
      return em.createQuery(criteriaQuery).getResultList();
   }

   public Department findByIdWithCriteria(Long id) {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Department> criteriaQuery = builder.createQuery(Department.class);
      Root<Department> root = criteriaQuery.from(Department.class);
      DataFetchingEnvironment dfe = context.unwrap(DataFetchingEnvironment.class);
      DataFetchingFieldSelectionSet selectionSet = dfe.getSelectionSet();
      if (selectionSet.contains("employees")) {
         root.fetch("employees", JoinType.LEFT);
      }
      if (selectionSet.contains("organization")) {
         root.fetch("organization", JoinType.LEFT);
      }
      criteriaQuery.where(builder.equal(root.get("id"), id));
      return em.createQuery(criteriaQuery).getSingleResult();
   }
}
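The selection-set checks above boil down to one decision: given the field names requested by the GraphQL query, which lazy relations should be added to the fetch join? Distilled into plain Java, with a hypothetical helper name, it looks like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Distilled version of the selection-set checks in DepartmentRepository
// (illustrative helper, not part of the sample application).
public class FetchPlanner {

    private static final List<String> RELATIONS = List.of("employees", "organization");

    static List<String> relationsToFetch(Set<String> requestedFields) {
        List<String> toFetch = new ArrayList<>();
        for (String relation : RELATIONS) {
            if (requestedFields.contains(relation)) { // only join what the query asks for
                toFetch.add(relation);
            }
        }
        return toFetch;
    }
}
```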

Finally, we just need to create a resource controller. It uses our custom JPA queries defined in DepartmentRepository.

@GraphQLApi
public class DepartmentFetcher {

   private DepartmentRepository repository;

   DepartmentFetcher(DepartmentRepository repository) {
      this.repository = repository;
   }

   @Query("departments")
   public List<Department> findAll() {
      return repository.findAllByCriteria();
   }

   @Query("department")
   public Department findById(@Name("id") Long id) {
      return repository.findByIdWithCriteria(id);
   }

}

5. Handling GraphQL Mutations with Quarkus

In our sample application, we separate the implementation of queries from mutations. So, let’s take a look at the DepartmentMutation class. Instead of @Query we use @Mutation annotation on the method. We also use the DepartmentInput object as a mutation method parameter.

@GraphQLApi
public class DepartmentMutation {

   private DepartmentRepository departmentRepository;
   private OrganizationRepository organizationRepository;

   DepartmentMutation(DepartmentRepository departmentRepository, 
         OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.organizationRepository = organizationRepository;
   }

   @Mutation("newDepartment")
   public Department newDepartment(@Name("input") DepartmentInput departmentInput) {
      Organization organization = organizationRepository
         .findById(departmentInput.getOrganizationId());
      Department department = new Department(null, departmentInput.getName(), null, organization);
      departmentRepository.persist(department);
      return department;
   }

}

6. Testing with GraphiQL

Once we have finished the implementation, we may build and start our Quarkus application using the following Maven command.

$ mvn package quarkus:dev

After startup, the application is available on port 8080. We may also take a look at the list of included Quarkus modules.

quarkus-graphql-startup

Quarkus automatically generates a GraphQL schema based on the source code. In order to display it, you can invoke the URL http://localhost:8080/graphql/schema.graphql. Of course, this is an optional step. Something that will be pretty useful for us, though, is the GraphiQL tool. It is embedded into the Quarkus application, allows us to easily interact with GraphQL APIs, and can be accessed at http://localhost:8080/graphql-ui/. First, let's run the following query to test the filtering feature.

{
  employeesWithFilter(filter: {
    salary: {
      operator: "gt"
      value: "19000"
    },
    age: {
      operator: "gt"
      value: "30"
    }
  }) {
    id
    firstName
    lastName
    position
  }
}

Here’s the SQL query generated by Hibernate for our GraphQL query.

select
   employee0_.id as id1_1_,
   employee0_.age as age2_1_,
   employee0_.department_id as departme7_1_,
   employee0_.firstName as firstnam3_1_,
   employee0_.lastName as lastname4_1_,
   employee0_.organization_id as organiza8_1_,
   employee0_.position as position5_1_,
   employee0_.salary as salary6_1_ 
from
   Employee employee0_ 
where
   employee0_.salary>19000 
   and employee0_.age>30

Our sample application inserts some test data into the H2 database on startup. So, we just need to execute a query using GraphiQL.

Now, let’s repeat the same exercise to test the join feature. Here’s our GraphQL input query responsible for fetching relation with the Organization entity.

{
  department(id: 5) {
    id
    name
    organization {
      id
      name
    }
  }
}

Hibernate generates the following SQL query for that.

select
   department0_.id as id1_0_0_,
   organizati1_.id as id1_2_1_,
   department0_.name as name2_0_0_,
   department0_.organization_id as organiza3_0_0_,
   organizati1_.name as name2_2_1_ 
from
   Department department0_ 
left outer join
   Organization organizati1_ 
      on department0_.organization_id=organizati1_.id 
where
   department0_.id=5

Once again, let’s view the response for our query using GraphiQL.

Final Thoughts

GraphQL support in Quarkus, like several other features, is based on the SmallRye project. In Spring Boot, we can use third-party libraries that provide GraphQL support. One of them is Netflix DGS. There is also a popular Kickstart GraphQL library described in this article. However, we can’t use any default implementation developed by the Spring Team.

With Quarkus GraphQL support we can easily migrate from Spring Boot to Quarkus. I would not say that Quarkus currently offers as many GraphQL features as, for example, Netflix DGS, but it is still under active development. We could also easily replace Spring Data with the Quarkus Panache project. The lack of a feature similar to the Spring Data Specification can be easily bypassed by using the JPA CriteriaBuilder and EntityManager directly. Finally, I really like the Quarkus GraphQL support, because I don't have to take care of GraphQL schema creation, which is generated automatically based on the source code and annotations.

The post An Advanced GraphQL with Quarkus appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/04/14/advanced-graphql-with-quarkus/feed/ 8 9672
An Advanced GraphQL with Spring Boot and Netflix DGS https://piotrminkowski.com/2021/04/08/an-advanced-graphql-with-spring-boot-and-netflix-dgs/ https://piotrminkowski.com/2021/04/08/an-advanced-graphql-with-spring-boot-and-netflix-dgs/#comments Thu, 08 Apr 2021 08:05:27 +0000 https://piotrminkowski.com/?p=9639 In this article, you will learn how to use the Netflix DGS library to simplify GraphQL development with Spring Boot. We will discuss more advanced topics related to GraphQL and databases, like filtering or relationship fetching. I published a similar article some months ago: An Advanced Guide to GraphQL with Spring Boot. However, it is […]

The post An Advanced GraphQL with Spring Boot and Netflix DGS appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use the Netflix DGS library to simplify GraphQL development with Spring Boot. We will discuss more advanced topics related to GraphQL and databases, like filtering or relationship fetching. I published a similar article some months ago: An Advanced Guide to GraphQL with Spring Boot. However, it is based on a different library called GraphQL Java Kickstart (https://github.com/graphql-java-kickstart/graphql-spring-boot). Since Netflix DGS was released some months ago, you might want to take a look at it. So, that's what we will do now.

Netflix DGS is an annotation-based GraphQL Java library built on top of Spring Boot. Consequently, it is dedicated to Spring Boot applications. Besides the annotation-based programming model, it provides several useful features. Netflix DGS allows generating source code from GraphQL schemas. It simplifies writing unit tests and also supports websockets, file uploads, or GraphQL federation. In order to show you the differences between this library and the previously described Kickstart library, I’ll use the same Spring Boot application as before. Let me just briefly describe our scenario.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, clone my GitHub repository and then just follow my instructions.

First, you should go to the sample-app-netflix-dgs directory. The example with GraphQL Java Kickstart is available inside the sample-app-kickstart directory.

As I mentioned, we use the same schema and entity model as before. I created an application that exposes an API using GraphQL and connects to an H2 in-memory database. We will discuss Spring Boot GraphQL JPA support. For integration with the H2 database, I'm using Spring Data JPA and Hibernate. I have implemented three entities: Employee, Department and Organization – each of them stored in a separate table. The relationship model between them is visualized in the picture below.

spring-boot-graphql-netflix-domain

1. Dependencies for Spring Boot and Netflix GraphQL

Let’s start with dependencies. We need to include Spring Web, Spring Data JPA, and the com.database:h2 artifact for running an in-memory database with our application. Of course, we also have to include Netflix DGS Spring Boot Starter. Here’s a list of required dependencies in Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>com.h2database</groupId>
   <artifactId>h2</artifactId>
   <scope>runtime</scope>
</dependency>
<dependency>
   <groupId>org.projectlombok</groupId>
   <artifactId>lombok</artifactId>
</dependency>
<dependency>
   <groupId>com.netflix.graphql.dgs</groupId>
   <artifactId>graphql-dgs-spring-boot-starter</artifactId>
   <version>${netflix-dgs.spring.version}</version>
</dependency>

2. GraphQL schemas

Before we start implementation, we need to create GraphQL schemas with objects, queries, and mutations. A schema may be defined in multiple graphqls files, but all of them have to be placed inside the /src/main/resources/schemas directory. Thanks to that, the Netflix DGS library detects and loads them automatically.

GraphQL schema for each entity is located in the separated file. Let’s take a look at the department.graphqls file. There is the QueryResolver with two find methods and the MutationResolver with a single method for adding new departments. We also have an input object for mutation and a standard type definition for queries.

type QueryResolver {
   departments: [Department]
   department(id: ID!): Department!
}

type MutationResolver {
   newDepartment(department: DepartmentInput!): Department
}

input DepartmentInput {
   name: String!
   organizationId: Int
}

type Department {
   id: ID!
   name: String!
   organization: Organization
   employees: [Employee]
}

Then we may take a look at the organization.graphqls file. It is a little bit more complicated than the previous schema. As you see I’m using the keyword extend on QueryResolver and MutationResolver. That’s because we have several files with GraphQL schemas.

extend type QueryResolver {
  organizations: [Organization]
  organization(id: ID!): Organization!
}

extend type MutationResolver {
  newOrganization(organization: OrganizationInput!): Organization
}

input OrganizationInput {
  name: String!
}

type Organization {
  id: ID!
  name: String!
  employees: [Employee]
  departments: [Department]
}

Finally, the schema for the Employee entity. In contrast to the previous schemas, it has objects responsible for filtering like EmployeeFilter. We also need to define the schema object with mutation and query.

extend type QueryResolver {
  employees: [Employee]
  employeesWithFilter(filter: EmployeeFilter): [Employee]
  employee(id: ID!): Employee!
}

extend type MutationResolver {
  newEmployee(employee: EmployeeInput!): Employee
}

input EmployeeInput {
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  organizationId: Int!
  departmentId: Int!
}

type Employee {
  id: ID!
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  department: Department
  organization: Organization
}

input EmployeeFilter {
  salary: FilterField
  age: FilterField
  position: FilterField
}

input FilterField {
  operator: String!
  value: String!
}

schema {
  query: QueryResolver
  mutation: MutationResolver
}

3. Domain Model for GraphQL and Hibernate

We could have generated the Java source code from the previously defined GraphQL schemas. However, I prefer to use Lombok annotations, so I will do it manually. Here's the Employee entity corresponding to the Employee object defined in the GraphQL schema.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Also, let’s take a look at the Department entity.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

The input objects are much simpler. Just to compare, here’s the DepartmentInput class.

@Data
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}

4. Using Netflix DGS with Spring Boot

Netflix DGS provides annotation-based support for Spring Boot. Let's analyze the most interesting features using the example implementation of a query resolver. The EmployeeFetcher is responsible for defining queries related to the Employee object. We should annotate such a class with @DgsComponent (1). We may create our custom context definition to pass data between different methods or even different query resolvers (2). Then, we have to annotate every query method with @DgsData (3). The parentType and field attributes should match the names declared in the GraphQL schemas. We defined three queries in the employee.graphqls file, so we have three methods inside EmployeeFetcher. After fetching all employees, we may save them in our custom context object (4), and then reuse them in other methods or resolvers (5).

The last query method findWithFilter performs advanced filtering based on the dynamic list of fields passed in the input (6). To pass an input parameter we should annotate the method argument with @InputArgument.

@DgsComponent // (1)
public class EmployeeFetcher {

   private EmployeeRepository repository;
   private EmployeeContextBuilder contextBuilder; // (2)

   public EmployeeFetcher(EmployeeRepository repository, 
         EmployeeContextBuilder contextBuilder) {
      this.repository = repository;
      this.contextBuilder = contextBuilder;
    }

   @DgsData(parentType = "QueryResolver", field = "employees") // (3)
   public List<Employee> findAll() {
      List<Employee> employees = (List<Employee>) repository.findAll();
      contextBuilder.withEmployees(employees).build(); // (4)
      return employees;
   }

   @DgsData(parentType = "QueryResolver", field = "employee") 
   public Employee findById(@InputArgument("id") Integer id, 
               DataFetchingEnvironment dfe) {
      EmployeeContext employeeContext = DgsContext.getCustomContext(dfe); // (5)
      List<Employee> employees = employeeContext.getEmployees();
      Optional<Employee> employeeOpt = employees.stream()
         .filter(employee -> employee.getId().equals(id)).findFirst();
      return employeeOpt.orElseGet(() -> 
         repository.findById(id)
            .orElseThrow(DgsEntityNotFoundException::new));
   }

   @DgsData(parentType = "QueryResolver", field = "employeesWithFilter")
   public Iterable<Employee> findWithFilter(@InputArgument("filter") EmployeeFilter filter) { // (6)
      Specification<Employee> spec = null;
      if (filter.getSalary() != null)
         spec = bySalary(filter.getSalary());
      if (filter.getAge() != null)
         spec = (spec == null ? byAge(filter.getAge()) : spec.and(byAge(filter.getAge())));
      if (filter.getPosition() != null)
         spec = (spec == null ? byPosition(filter.getPosition()) :
                spec.and(byPosition(filter.getPosition())));
      if (spec != null)
         return repository.findAll(spec);
      else
         return repository.findAll();
   }

   private Specification<Employee> bySalary(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("salary"));
   }

   private Specification<Employee> byAge(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("age"));
   }

   private Specification<Employee> byPosition(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("position"));
   }
}
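The EmployeeContext and EmployeeContextBuilder classes used by the fetcher above are not shown in this excerpt. A minimal sketch could look like the following. In the real application the builder would implement Netflix DGS's DgsCustomContextBuilder<EmployeeContext> and be registered as a Spring bean; the framework wiring is omitted here so the sketch stays self-contained, and the nested Employee class is a stand-in for the JPA entity.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the custom context holding employees fetched earlier in the
// request (names follow the article's usage; details are assumptions).
public class EmployeeContext {

    private final List<Employee> employees;

    public EmployeeContext(List<Employee> employees) {
        this.employees = employees;
    }

    public List<Employee> getEmployees() {
        return employees;
    }

    // Minimal stand-in for the Employee entity.
    public static class Employee {
        private final Integer id;
        public Employee(Integer id) { this.id = id; }
        public Integer getId() { return id; }
    }

    public static class EmployeeContextBuilder {
        private List<Employee> employees = new ArrayList<>();

        // Mirrors the withEmployees(...).build() call seen in EmployeeFetcher.
        public EmployeeContextBuilder withEmployees(List<Employee> employees) {
            this.employees = employees;
            return this;
        }

        public EmployeeContext build() {
            return new EmployeeContext(employees);
        }
    }
}
```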

Then, we may switch to the DepartmentFetcher class. It shows an example of relationship fetching. We use DataFetchingEnvironment to detect if the input query contains a relationship field (1). In our case, it may be employees or organization. If any of those fields is defined, we add the relation to the JOIN statement (2). We implement the same approach for both the findById (3) and findAll methods. However, the findById method also uses data stored in the custom context represented by the EmployeeContext bean (4). If the findAll method in EmployeeFetcher has already been invoked, we can fetch the employees assigned to the particular department from the context instead of including the relation in the JOIN statement (5).

@DgsComponent
public class DepartmentFetcher {

   private DepartmentRepository repository;

   DepartmentFetcher(DepartmentRepository repository) {
      this.repository = repository;
   }

   @DgsData(parentType = "QueryResolver", field = "departments")
   public Iterable<Department> findAll(DataFetchingEnvironment environment) {
      DataFetchingFieldSelectionSet s = environment.getSelectionSet(); // (1)
      if (s.contains("employees") && !s.contains("organization")) // (2)
         return repository.findAll(fetchEmployees());
      else if (!s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchOrganization());
      else if (s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchEmployees().and(fetchOrganization()));
      else
         return repository.findAll();
   }

   @DgsData(parentType = "QueryResolver", field = "department")
   public Department findById(@InputArgument("id") Integer id, 
               DataFetchingEnvironment environment) { // (3)
      Specification<Department> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment.getSelectionSet();
      EmployeeContext employeeContext = DgsContext.getCustomContext(environment); // (4)
      Set<Employee> employees = null;
      if (selectionSet.contains("employees")) {
         if (employeeContext.getEmployees().size() == 0) // (5)
            spec = spec.and(fetchEmployees());
         else
            employees = employeeContext.getEmployees().stream()
               .filter(emp -> emp.getDepartment().getId().equals(id))
               .collect(Collectors.toSet());
      }
      if (selectionSet.contains("organization"))
         spec = spec.and(fetchOrganization());
      Department department = repository
         .findOne(spec).orElseThrow(DgsEntityNotFoundException::new);
      if (employees != null)
         department.setEmployees(employees);
      return department;
   }

   private Specification<Department> fetchOrganization() {
      return (root, query, builder) -> {
         Fetch<Department, Organization> f = root.fetch("organization", JoinType.LEFT);
         Join<Department, Organization> join = (Join<Department, Organization>) f;
         return join.getOn();
      };
   }

   private Specification<Department> fetchEmployees() {
      return (root, query, builder) -> {
         Fetch<Department, Employee> f = root.fetch("employees", JoinType.LEFT);
         Join<Department, Employee> join = (Join<Department, Employee>) f;
         return join.getOn();
      };
   }

   private Specification<Department> byId(Integer id) {
      return (root, query, builder) -> builder.equal(root.get("id"), id);
   }
}

In comparison to the data fetchers, the implementation of mutation handlers is rather simple. We just need to define a single method for adding new entities. Here's the implementation of DepartmentMutation.

@DgsComponent
public class DepartmentMutation {

   private DepartmentRepository departmentRepository;
   private OrganizationRepository organizationRepository;

   DepartmentMutation(DepartmentRepository departmentRepository, 
               OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.organizationRepository = organizationRepository;
   }

   @DgsData(parentType = "MutationResolver", field = "newDepartment")
   public Department newDepartment(@InputArgument("department") DepartmentInput input) {
      Organization organization = organizationRepository
         .findById(input.getOrganizationId())
         .orElseThrow();
      return departmentRepository
         .save(new Department(null, input.getName(), null, organization));
   }

}

5. Running Spring Boot application and testing Netflix GraphQL support

The last step in our exercise is to run and test the Spring Boot application. It inserts some test data into the H2 database on startup. So, let's just use the GraphiQL tool to run test queries. It is automatically included in the application by the Netflix DGS library. We may display it by invoking the URL http://localhost:8080/graphiql.

In the first step, we run the GraphQL query responsible for fetching all employees with departments. The method that handles the query also builds a custom context and stores all existing employees there.

spring-boot-graphql-netflix-query

Then, we may run a query responsible for finding a single department by its id. We will fetch both relations one-to-many with Employee and many-to-one with Organization.

While the Organization entity is fetched using the JOIN statement, Employee is taken from the context. Here’s the SQL query generated for our current scenario.

spring-boot-graphql-netflix-query-next

Finally, we can test our filtering feature. Let’s filter employees using salary and age criteria.

Let’s take a look at the SQL query for the recently called method.

Final Thoughts

Netflix DGS seems to be an interesting alternative to other libraries that provide GraphQL support for Spring Boot. It was open-sourced only some weeks ago, but it is already a rather stable solution. I guess that before releasing it publicly, the Netflix team battle-tested it internally. I like its annotation-based programming style and several other features. I hope this article helps you get started with Netflix DGS.

The post An Advanced GraphQL with Spring Boot and Netflix DGS appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/04/08/an-advanced-graphql-with-spring-boot-and-netflix-dgs/feed/ 20 9639
An Advanced Guide to GraphQL with Spring Boot https://piotrminkowski.com/2020/07/31/an-advanced-guide-to-graphql-with-spring-boot/ https://piotrminkowski.com/2020/07/31/an-advanced-guide-to-graphql-with-spring-boot/#comments Fri, 31 Jul 2020 09:31:27 +0000 http://piotrminkowski.com/?p=8220 In this guide I’m going to discuss some more advanced topics related to GraphQL and databases, like filtering or relationship fetching. Of course, before proceeding to the more advanced issues I will take a moment to describe the basics – something you can be found in many other articles. If you already had the opportunity […]

The post An Advanced Guide to GraphQL with Spring Boot appeared first on Piotr's TechBlog.

]]>
In this guide I’m going to discuss some more advanced topics related to GraphQL and databases, like filtering or relationship fetching. Of course, before proceeding to the more advanced issues I will take a moment to describe the basics – something you can find in many other articles. If you have already had the opportunity to familiarize yourself with the concept of GraphQL, you may have some questions. Probably one of them is: “Ok. It’s nice. But what if I would like to use GraphQL in a real application that connects to a database and provides an API for more advanced queries?”.
If that is your main question, my current article is definitely for you. If you are thinking about using GraphQL in your microservices architecture, you may also refer to my previous article GraphQL – The Future of Microservices?.

Example

As you know, it is best to learn from examples, so I have created a sample Spring Boot application that exposes an API using GraphQL and connects to an H2 in-memory database. We will discuss Spring Boot GraphQL JPA support. For integration with the H2 database I’m using Spring Data JPA and Hibernate. I have implemented three entities, Employee, Department and Organization – each of them stored in a separate table. The relationship model between them is visualized in the picture below.

graphql-spring-boot-relations.png

The source code with the sample application is available on GitHub in the repository: https://github.com/piomin/sample-spring-boot-graphql.git

1. Dependencies

Let’s start with dependencies. Here’s a list of required dependencies for our application. We need to include the projects Spring Web and Spring Data JPA, and the com.h2database:h2 artifact to embed an in-memory database in our application. I’m also using a Spring Boot library offering support for GraphQL. In fact, you may find some other Spring Boot GraphQL JPA libraries, but the one under the group com.graphql-java-kickstart (https://www.graphql-java-kickstart.com/spring-boot/) seems to be actively developed and maintained.


<properties>
   <graphql.spring.version>7.1.0</graphql.spring.version>
</properties>
<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
   </dependency>
   <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <scope>runtime</scope>
   </dependency>
   <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphql-spring-boot-starter</artifactId>
      <version>${graphql.spring.version}</version>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphiql-spring-boot-starter</artifactId>
      <version>${graphql.spring.version}</version>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphql-spring-boot-starter-test</artifactId>
      <version>${graphql.spring.version}</version>
      <scope>test</scope>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
   </dependency>
</dependencies>

2. Schemas

We start implementation by defining GraphQL schemas with object, query and mutation definitions. The files are located inside the /src/main/resources/graphql directory, and after adding graphql-spring-boot-starter they are automatically detected by the application based on their *.graphqls suffix.
The GraphQL schema for each entity is located in a separate file. Let’s take a look at department.graphqls. It’s a very trivial definition.

type QueryResolver {
    departments: [Department]
    department(id: ID!): Department!
}

type MutationResolver {
    newDepartment(department: DepartmentInput!): Department
}

input DepartmentInput {
    name: String!
    organizationId: Int
}

type Department {
    id: ID!
    name: String!
    organization: Organization
    employees: [Employee]
}

Here’s the schema inside the file organization.graphqls. As you can see, I’m using the extend keyword on QueryResolver and MutationResolver.

extend type QueryResolver {
    organizations: [Organization]
    organization(id: ID!): Organization!
}

extend type MutationResolver {
    newOrganization(organization: OrganizationInput!): Organization
}

input OrganizationInput {
    name: String!
}

type Organization {
    id: ID!
    name: String!
    employees: [Employee]
    departments: [Department]
}

The schema for Employee is a little bit more complicated than the two previous schemas. I have defined an input object for filtering. It will be discussed in detail in the next section.

extend type QueryResolver {
  employees: [Employee]
  employeesWithFilter(filter: EmployeeFilter): [Employee]
  employee(id: ID!): Employee!
}

extend type MutationResolver {
  newEmployee(employee: EmployeeInput!): Employee
}

input EmployeeInput {
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  organizationId: Int!
  departmentId: Int!
}

type Employee {
  id: ID!
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  department: Department
  organization: Organization
}

input EmployeeFilter {
  salary: FilterField
  age: FilterField
  position: FilterField
}

input FilterField {
  operator: String!
  value: String!
}

schema {
  query: QueryResolver
  mutation: MutationResolver
}

3. Domain model

Let’s take a look at the corresponding domain model. Here’s Employee entity. Each Employee is assigned to a single Department and Organization.

@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s Department entity. It contains a list of employees and a reference to a single organization.


@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

And finally Organization entity.

@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Organization {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;
}

Entity classes are returned as results by queries. In mutations we are using input objects, which have a slightly different implementation. They do not contain references to related entities, but only the ids of related objects.

@Data
@AllArgsConstructor
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}
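The article does not show the corresponding mutation resolver, so here’s a minimal, framework-free sketch of what newDepartment might do with such an input: resolve organizationId to an organization reference and build the entity. The class shapes and the map standing in for OrganizationRepository are simplifications, not the actual implementation from the repository.

```java
import java.util.HashMap;
import java.util.Map;

public class DepartmentMapperDemo {

    // Reduced stand-ins for the real entity and input classes
    record OrganizationRef(Integer id, String name) {}
    record DepartmentInput(String name, Integer organizationId) {}
    record Department(String name, OrganizationRef organization) {}

    // Stand-in for OrganizationRepository.findById(...)
    static final Map<Integer, OrganizationRef> ORGANIZATIONS = new HashMap<>();
    static {
        ORGANIZATIONS.put(1, new OrganizationRef(1, "Test1"));
    }

    // What a newDepartment mutation would do before saving the entity:
    // resolve the id carried by the input into an entity reference
    static Department newDepartment(DepartmentInput input) {
        OrganizationRef org = input.organizationId() == null
                ? null
                : ORGANIZATIONS.get(input.organizationId());
        return new Department(input.name(), org);
    }

    public static void main(String[] args) {
        Department d = newDepartment(new DepartmentInput("R&D", 1));
        System.out.println(d.name() + " -> " + d.organization().name());
    }
}
```

In the real application the resolver would of course delegate to Spring Data repositories instead of a static map.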

4. Fetch relations

As you probably figured out, all the JPA relations are configured in lazy mode. To fetch them, we should explicitly request it in our GraphQL query. For example, we may query all departments and fetch the organization of each department returned in the list.


{
  departments {
    id
    name
    organization {
      id
      name
    }
  }
}

Now, the question is how to handle it on the server side. The first thing we need to do is to detect the existence of such a relationship field in our GraphQL query. Why? Because we need to avoid the possible N+1 problem, which happens when the data access framework executes N additional SQL statements to fetch the same data that could have been retrieved by the primary SQL query. So, we need to prepare different JPA queries depending on the fields requested in the GraphQL query. We may do it in several ways, but the most convenient is to use the DataFetchingEnvironment parameter inside the QueryResolver implementation.
Let’s take a look at the implementation of the QueryResolver for Department. If we annotate a class that implements GraphQLQueryResolver with @Component, it is automatically detected by Spring Boot (thanks to graphql-spring-boot-starter). Then we add DataFetchingEnvironment as a parameter to each query method. After that we invoke getSelectionSet() on the DataFetchingEnvironment object and check whether it contains the word organization (for fetching Organization) or employees (for fetching the list of employees). Depending on the requested relations we build different queries. In the following fragment of code two methods are implemented for DepartmentQueryResolver: departments and department.

@Component
public class DepartmentQueryResolver implements GraphQLQueryResolver {

   private DepartmentRepository repository;

   DepartmentQueryResolver(DepartmentRepository repository) {
      this.repository = repository;
   }

   public Iterable<Department> departments(DataFetchingEnvironment environment) {
      DataFetchingFieldSelectionSet s = environment.getSelectionSet();
      if (s.contains("employees") && !s.contains("organization"))
         return repository.findAll(fetchEmployees());
      else if (!s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchOrganization());
      else if (s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchEmployees().and(fetchOrganization()));
      else
         return repository.findAll();
   }

   public Department department(Integer id, DataFetchingEnvironment environment) {
      Specification<Department> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment.getSelectionSet();
      if (selectionSet.contains("employees"))
         spec = spec.and(fetchEmployees());
      if (selectionSet.contains("organization"))
         spec = spec.and(fetchOrganization());
      return repository.findOne(spec).orElseThrow(NoSuchElementException::new);
   }
   
   // REST OF IMPLEMENTATION ...
}

The most convenient way to build dynamic queries is by using the JPA Criteria API. To be able to use it with Spring Data JPA we first need to extend our repository interface with the JpaSpecificationExecutor interface. After that you may use the additional interface methods that let you execute specifications in a variety of ways. You may choose between the findAll and findOne methods.

public interface DepartmentRepository extends CrudRepository<Department, Integer>,
      JpaSpecificationExecutor<Department> {

}

Finally, we may just prepare methods that build the Specification object. This object contains a predicate. In this case we are using three predicates: for fetching the organization, for fetching employees, and for filtering by id.

private Specification<Department> fetchOrganization() {
   return (Specification<Department>) (root, query, builder) -> {
      Fetch<Department, Organization> f = root.fetch("organization", JoinType.LEFT);
      Join<Department, Organization> join = (Join<Department, Organization>) f;
      return join.getOn();
   };
}

private Specification<Department> fetchEmployees() {
   return (Specification<Department>) (root, query, builder) -> {
      Fetch<Department, Employee> f = root.fetch("employees", JoinType.LEFT);
      Join<Department, Employee> join = (Join<Department, Employee>) f;
      return join.getOn();
   };
}

private Specification<Department> byId(Integer id) {
   return (Specification<Department>) (root, query, builder) -> builder.equal(root.get("id"), id);
}

5. Filtering

For a start, let’s refer to section 2 – Schemas. Inside employee.graphqls I defined two additional inputs, FilterField and EmployeeFilter, and also a single method employeesWithFilter that takes EmployeeFilter as an argument. The FilterField class is my custom implementation of a filter for GraphQL queries. It is very simple: it provides two filter types, one for numbers and one for strings, and generates a JPA Criteria Predicate. Of course, instead of creating such a filter implementation by yourself (like me), you may leverage some existing libraries for that. However, as you can see in the following code, it does not require much time to do it yourself. Our custom filter implementation has two parameters: operator and value.

@Data
public class FilterField {
   private String operator;
   private String value;

   public Predicate generateCriteria(CriteriaBuilder builder, Path field) {
      try {
         int v = Integer.parseInt(value);
         switch (operator) {
         case "lt": return builder.lt(field, v);
         case "le": return builder.le(field, v);
         case "gt": return builder.gt(field, v);
         case "ge": return builder.ge(field, v);
         case "eq": return builder.equal(field, v);
         }
      } catch (NumberFormatException e) {
         switch (operator) {
         case "endsWith": return builder.like(field, "%" + value);
         case "startsWith": return builder.like(field, value + "%");
         case "contains": return builder.like(field, "%" + value + "%");
         case "eq": return builder.equal(field, value);
         }
      }

      return null;
   }
}
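The dispatch logic above can be exercised without a JPA runtime. Here’s a framework-free analogue of generateCriteria that applies the same operator dispatch directly to plain values instead of building a Criteria Predicate; it is purely illustrative and not part of the sample repository.

```java
public class FilterFieldDemo {

    // Mirrors FilterField.generateCriteria: try the numeric operators
    // first, fall back to string operators on NumberFormatException
    static boolean matches(String operator, String value, Object fieldValue) {
        try {
            int v = Integer.parseInt(value);
            int f = ((Number) fieldValue).intValue();
            switch (operator) {
                case "lt": return f < v;
                case "le": return f <= v;
                case "gt": return f > v;
                case "ge": return f >= v;
                case "eq": return f == v;
            }
        } catch (NumberFormatException e) {
            String f = String.valueOf(fieldValue);
            switch (operator) {
                case "endsWith": return f.endsWith(value);
                case "startsWith": return f.startsWith(value);
                case "contains": return f.contains(value);
                case "eq": return f.equals(value);
            }
        }
        return false; // unknown operator, like the null Predicate above
    }

    public static void main(String[] args) {
        System.out.println(matches("gt", "12000", 13000));           // true
        System.out.println(matches("eq", "Developer", "Developer")); // true
        System.out.println(matches("lt", "30", 35));                 // false
    }
}
```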

Now, with FilterField we may create a concrete filter implementation consisting of several simple FilterField objects. An example of such an implementation is the EmployeeFilter class, which has three possible filtering criteria: salary, age and position.

@Data
public class EmployeeFilter {
   private FilterField salary;
   private FilterField age;
   private FilterField position;
}

Now, if you would like to use that filter in your GraphQL query, you should create something like the following. In this query we are searching for all developers who have a salary greater than 12000 and are older than 30 years.

{
  employeesWithFilter(filter: {
    salary: {
      operator: "gt"
      value: "12000"
    },
    age: {
      operator: "gt"
      value: "30"
    },
    position: {
      operator: "eq",
      value: "Developer"
    }
  }) {
    id
    firstName
    lastName
    position
  }
}

Let’s take a look at the implementation of the query resolver. The same as for fetching relations, we are using the JPA Criteria API and the Specification class. There are three methods that create a Specification for each of the possible filter fields. Then I’m dynamically building the filtering criteria based on the content of EmployeeFilter.

@Component
public class EmployeeQueryResolver implements GraphQLQueryResolver {

   private EmployeeRepository repository;

   EmployeeQueryResolver(EmployeeRepository repository) {
      this.repository = repository;
   }

   // OTHER FIND METHODS ...
   
   public Iterable<Employee> employeesWithFilter(EmployeeFilter filter) {
      Specification<Employee> spec = null;
      if (filter.getSalary() != null)
         spec = bySalary(filter.getSalary());
      if (filter.getAge() != null)
         spec = (spec == null ? byAge(filter.getAge()) : spec.and(byAge(filter.getAge())));
      if (filter.getPosition() != null)
         spec = (spec == null ? byPosition(filter.getPosition()) :
               spec.and(byPosition(filter.getPosition())));
      if (spec != null)
         return repository.findAll(spec);
      else
         return repository.findAll();
   }

   private Specification<Employee> bySalary(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("salary"));
   }

   private Specification<Employee> byAge(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("age"));
   }

   private Specification<Employee> byPosition(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("position"));
   }
}
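The null-checking chain in employeesWithFilter can be generalized: collect whichever criteria are present and and-combine them. Here’s a sketch of that composition pattern using java.util.function.Predicate as a stand-in for Spring Data’s Specification (which composes with and() in the same way); the helper and names are illustrative, not from the sample project.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class SpecChainDemo {

    record Employee(int salary, int age) {}

    // And-combine all present criteria; empty Optional means "no filter"
    static <T> Optional<Predicate<T>> combine(List<Predicate<T>> parts) {
        return parts.stream().reduce(Predicate::and);
    }

    public static void main(String[] args) {
        List<Predicate<Employee>> parts = new ArrayList<>();
        parts.add(e -> e.salary() > 12000); // salary filter present
        parts.add(e -> e.age() > 30);       // age filter present

        // No filters present would fall back to "match everything",
        // just like the findAll() branch in employeesWithFilter
        Predicate<Employee> spec = combine(parts).orElse(e -> true);

        System.out.println(spec.test(new Employee(15000, 40))); // true
        System.out.println(spec.test(new Employee(10000, 40))); // false
    }
}
```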

6. Testing Spring Boot GraphQL JPA support

We will insert some test data into the H2 database by defining data.sql inside the src/main/resources directory.

insert into organization (id, name) values (1, 'Test1');
insert into organization (id, name) values (2, 'Test2');
insert into organization (id, name) values (3, 'Test3');
insert into organization (id, name) values (4, 'Test4');
insert into organization (id, name) values (5, 'Test5');
insert into department (id, name, organization_id) values (1, 'Test1', 1);
insert into department (id, name, organization_id) values (2, 'Test2', 1);
insert into department (id, name, organization_id) values (3, 'Test3', 1);
insert into department (id, name, organization_id) values (4, 'Test4', 2);
insert into department (id, name, organization_id) values (5, 'Test5', 2);
insert into department (id, name, organization_id) values (6, 'Test6', 3);
insert into department (id, name, organization_id) values (7, 'Test7', 4);
insert into department (id, name, organization_id) values (8, 'Test8', 5);
insert into department (id, name, organization_id) values (9, 'Test9', 5);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (1, 'John', 'Smith', 'Developer', 10000, 30, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (2, 'Adam', 'Hamilton', 'Developer', 12000, 35, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (3, 'Tracy', 'Smith', 'Architect', 15000, 40, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (4, 'Lucy', 'Kim', 'Developer', 13000, 25, 2, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (5, 'Peter', 'Wright', 'Director', 50000, 50, 4, 2);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (6, 'Alan', 'Murray', 'Developer', 20000, 37, 4, 2);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (7, 'Pamela', 'Anderson', 'Analyst', 7000, 27, 4, 2);

Now, we can easily perform some test queries by using GraphiQL, which is embedded into our application and available at http://localhost:8080/graphiql after startup. First, let’s verify the filtering query.

graphql-spring-boot-query-1

Now, we may test fetching by searching Department by id and fetching a list of employees and organization.

graphql-spring-boot-query-2

The post An Advanced Guide to GraphQL with Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/07/31/an-advanced-guide-to-graphql-with-spring-boot/feed/ 6 8220
Distributed Transactions in Microservices with Spring Boot https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/ https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/#comments Fri, 19 Jun 2020 10:13:34 +0000 http://piotrminkowski.com/?p=8144 When I’m talking about microservices with other people they are often asking me about an approach to distributed transactions. My advice is always the same – try to completely avoid distributed transactions in your microservices architecture. It is a very complex process with a lot of moving parts that can fail. That’s why it does […]

The post Distributed Transactions in Microservices with Spring Boot appeared first on Piotr's TechBlog.

]]>
When I talk about microservices with other people, they often ask me about an approach to distributed transactions. My advice is always the same – try to completely avoid distributed transactions in your microservices architecture. It is a very complex process with a lot of moving parts that can fail. That’s why it does not fit the nature of microservices-based systems.

However, if for any reason you require distributed transactions, there are two popular approaches: the Two-Phase Commit protocol, and Eventual Consistency and Compensation, also known as the Saga pattern. You can read some interesting articles about them online. Most of them discuss theoretical aspects related to those approaches, so in this article I’m going to present a sample implementation in Spring Boot. It is worth mentioning that there are some ready implementations of the Saga pattern, like the support for complex business transactions provided by the Axon Framework. The documentation of this solution is available here: https://docs.axoniq.io/reference-guide/implementing-domain-logic/complex-business-transactions.

Example

The source code with sample applications is as usual available on GitHub in the repository: https://github.com/piomin/sample-spring-microservices-transactions.git.

Architecture

First, we need to add a new component to our system. It is responsible solely for managing distributed transactions across microservices. That element is described as transaction-server on the diagram below. We also use another component popular in microservices-based architectures, the discovery-server. There are three applications: order-service, account-service and product-service. The application order-service communicates with account-service and product-service. All these applications use a Postgres database as their backend store. Just for simplification I have run a single database with multiple tables. In a normal situation we would have a separate database per microservice.

spring-microservice-transactions-arch1

Now, we will consider the following situation (it is visualized on the diagram below). The application order-service creates an order, stores it in the database, and then starts a new distributed transaction (1). After that, it communicates with product-service to update the current number of stored products and get their price (2). At the same time product-service sends information to transaction-server that it is participating in the transaction (3). Then order-service tries to withdraw the required funds from the customer account and transfer them into another account related to the seller (4). Finally, we roll back the transaction by throwing an exception inside the transactional method in order-service (6). This rollback should cause a rollback of the whole distributed transaction.

spring-microservices-transactions-arch2 (1)

Building transaction server

We start implementation with transaction-server. The transaction server is responsible for managing distributed transactions across all microservices in our sample system. It exposes a REST API, available to all other microservices, for adding new transactions and updating their status. It also sends asynchronous broadcast events after receiving a transaction confirmation or rollback from a source microservice. It uses the RabbitMQ message broker to send events to other microservices via a topic exchange. All other microservices listen for incoming events, and after receiving them they commit or roll back transactions. We could avoid using a message broker and exchange events over HTTP endpoints instead, but that makes sense only if we have a single instance of every microservice. Here’s a picture that illustrates the described architecture.

spring-microservice-transactions-server (1)

Let’s take a look at the list of required dependencies. It is pretty much the same for the other applications. We need spring-boot-starter-amqp for integration with RabbitMQ, spring-boot-starter-web for exposing a REST API over HTTP, spring-cloud-starter-netflix-eureka-client for integration with the Eureka discovery server, and some basic Kotlin libraries.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

In the main class we define a topic exchange for events sent to microservices. The name of the exchange is trx-events, and it is automatically created on RabbitMQ at application startup.

@SpringBootApplication
class TransactionServerApp {

    @Bean
    fun topic(): TopicExchange = TopicExchange("trx-events")
}

fun main(args: Array<String>) {
    runApplication<TransactionServerApp>(*args)
}

Here are the domain model classes used by the transaction server. The same classes are used by the microservices during communication with transaction-server.

data class DistributedTransaction(var id: String? = null, var status: DistributedTransactionStatus,
                                  val participants: MutableList<DistributedTransactionParticipant> = mutableListOf())
                          
class DistributedTransactionParticipant(val serviceId: String, var status: DistributedTransactionStatus)

enum class DistributedTransactionStatus {
    NEW, CONFIRMED, ROLLBACK, TO_ROLLBACK
}   

Here’s the controller class. It uses a simple in-memory repository implementation and RabbitTemplate for sending events to RabbitMQ. The HTTP API provides methods for adding a new transaction, finishing an existing transaction with a given status (CONFIRM or ROLLBACK), searching for a transaction by id, and adding participants (new services) to a transaction.

@RestController
@RequestMapping("/transactions")
class TransactionController(val repository: TransactionRepository,
                            val template: RabbitTemplate) {

    @PostMapping
    fun add(@RequestBody transaction: DistributedTransaction): DistributedTransaction =
            repository.save(transaction)

    @GetMapping("/{id}")
    fun findById(@PathVariable id: String): DistributedTransaction? = repository.findById(id)

    @PutMapping("/{id}/finish/{status}")
    fun finish(@PathVariable id: String, @PathVariable status: DistributedTransactionStatus) {
        val transaction: DistributedTransaction? = repository.findById(id)
        if (transaction != null) {
            transaction.status = status
            repository.update(transaction)
            template.convertAndSend("trx-events", DistributedTransaction(id, status))
        }
    }

    @PutMapping("/{id}/participants")
    fun addParticipant(@PathVariable id: String,
                       @RequestBody participant: DistributedTransactionParticipant) =
            repository.findById(id)?.participants?.add(participant)

    @PutMapping("/{id}/participants/{serviceId}/status/{status}")
    fun updateParticipant(@PathVariable id: String,
                          @PathVariable serviceId: String,
                          @PathVariable status: DistributedTransactionStatus) {
        val transaction: DistributedTransaction? = repository.findById(id)
        if (transaction != null) {
            val index = transaction.participants.indexOfFirst { it.serviceId == serviceId }
            if (index != -1) {
                transaction.participants[index].status = status
                template.convertAndSend("trx-events", DistributedTransaction(id, status))
            }
        }
    }

}   
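The TransactionRepository injected above is described only as "a simple in-memory implementation" and is not shown. A minimal sketch of such a repository might look like the following (written in Java for brevity; the class and method names mirror the controller's usage but are assumptions, not the project's actual code):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryTransactionRepository {

    // Mirrors DistributedTransactionStatus from the article
    enum Status { NEW, CONFIRMED, ROLLBACK, TO_ROLLBACK }

    // Reduced stand-in for DistributedTransaction
    static class Transaction {
        String id;
        Status status;
        Transaction(String id, Status status) { this.id = id; this.status = status; }
    }

    private final Map<String, Transaction> store = new ConcurrentHashMap<>();

    Transaction save(Transaction t) {
        if (t.id == null) t.id = UUID.randomUUID().toString(); // assign id on insert
        store.put(t.id, t);
        return t;
    }

    Transaction findById(String id) { return store.get(id); }

    void update(Transaction t) { store.put(t.id, t); }

    public static void main(String[] args) {
        InMemoryTransactionRepository repo = new InMemoryTransactionRepository();
        Transaction t = repo.save(new Transaction(null, Status.NEW));
        t.status = Status.CONFIRMED;
        repo.update(t);
        System.out.println(repo.findById(t.id).status); // CONFIRMED
    }
}
```

A ConcurrentHashMap keeps the store safe for concurrent REST calls; a production setup would of course use a durable store instead.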

Handling transactions in downstream services

Let’s analyze how our microservices handle transactions using the example of an account. Here’s the implementation of AccountService, which is called by the controller to transfer funds from/to an account. All methods here are @Transactional, and one annotation deserves special attention: @Async. It means that each method runs in a new thread and is processed asynchronously. Why? That’s the key concept here. We will block the transaction in order to wait for confirmation from transaction-server, but the main thread used by the controller will not be blocked. It returns the response with the current state of the Account immediately.

@Service
@Transactional
@Async
class AccountService(val repository: AccountRepository,
                     var applicationEventPublisher: ApplicationEventPublisher) {
    
    fun payment(id: Int, amount: Int, transactionId: String) =
            transfer(id, amount, transactionId)

    fun withdrawal(id: Int, amount: Int, transactionId: String) =
            transfer(id, (-1) * amount, transactionId)

    private fun transfer(id: Int, amount: Int, transactionId: String) {
        val accountOpt: Optional<Account> = repository.findById(id)
        if (accountOpt.isPresent) {
            val account: Account = accountOpt.get()
            account.balance += amount
            applicationEventPublisher.publishEvent(AccountTransactionEvent(transactionId, account))
            repository.save(account)
        }
    }

}

Here’s the implementation of the @Controller class. As you can see, it calls methods from AccountService that are processed asynchronously. The returned Account object is taken from the EventBus bean. This bean is responsible for exchanging asynchronous events within the application scope. An event is sent by the AccountTransactionListener bean, which is responsible for handling Spring transaction events.

@RestController
@RequestMapping("/accounts")
class AccountController(val repository: AccountRepository,
                        val service: AccountService,
                        val eventBus: EventBus) {

    @PostMapping
    fun add(@RequestBody account: Account): Account = repository.save(account)

    @GetMapping("/customer/{customerId}")
    fun findByCustomerId(@PathVariable customerId: Int): List<Account> =
            repository.findByCustomerId(customerId)

    @PutMapping("/{id}/payment/{amount}")
    fun payment(@PathVariable id: Int, @PathVariable amount: Int,
                @RequestHeader("X-Transaction-ID") transactionId: String): Account {
        service.payment(id, amount, transactionId)
        return eventBus.receiveEvent(transactionId)!!.account
    }

    @PutMapping("/{id}/withdrawal/{amount}")
    fun withdrawal(@PathVariable id: Int, @PathVariable amount: Int,
                   @RequestHeader("X-Transaction-ID") transactionId: String): Account {
        service.withdrawal(id, amount, transactionId)
        return eventBus.receiveEvent(transactionId)!!.account
    }

}

The event object exchanged between beans is very simple. It contains the id of the transaction and the current Account object.


class AccountTransactionEvent(val transactionId: String, val account: Account)

Finally, let’s take a look at the implementation of the AccountTransactionListener bean responsible for handling transactional events. We are using Spring’s @TransactionalEventListener to annotate methods that should handle incoming events. There are 4 possible event phases to handle: BEFORE_COMMIT, AFTER_COMMIT, AFTER_ROLLBACK and AFTER_COMPLETION. There is one very important thing about @TransactionalEventListener, which may not be very intuitive: it is processed in the same thread as the transaction. So if you do something that should not block the transactional thread, you should annotate it with @Async. However, in our case this behaviour is required, since we need to block the transactional thread until we receive a confirmation or rollback from transaction-server for the given transaction. These events are sent by transaction-server through RabbitMQ, and they are also exchanged between beans using EventBus. If the status of the received event is different than CONFIRMED, we throw an exception to roll back the transaction.
The AccountTransactionListener also listens for AFTER_ROLLBACK and AFTER_COMPLETION. After receiving such an event type it changes the status of the transaction by calling an endpoint exposed by transaction-server.

@Component
class AccountTransactionListener(val restTemplate: RestTemplate,
                                 val eventBus: EventBus) {

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    @Throws(AccountProcessingException::class)
    fun handleEvent(event: AccountTransactionEvent) {
        eventBus.sendEvent(event)
        var transaction: DistributedTransaction? = null
        for (x in 0..100) {
            transaction = eventBus.receiveTransaction(event.transactionId)
            if (transaction == null)
                Thread.sleep(100)
            else break
        }
        if (transaction == null || transaction.status != DistributedTransactionStatus.CONFIRMED)
            throw AccountProcessingException()
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    fun handleAfterRollback(event: AccountTransactionEvent) {
        restTemplate.put("http://transaction-server/transactions/{transactionId}/participants/{serviceId}/status/{status}",
                null, event.transactionId, "account-service", "TO_ROLLBACK")
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMPLETION)
    fun handleAfterCompletion(event: AccountTransactionEvent) {
        restTemplate.put("http://transaction-server/transactions/{transactionId}/participants/{serviceId}/status/{status}",
                null, event.transactionId, "account-service", "CONFIRM")
    }
    
}
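The BEFORE_COMMIT handler above busy-waits for the coordinator's decision for up to about 10 seconds (101 iterations with a 100 ms sleep). That pattern can be factored out into a small poll-with-timeout helper; a sketch in Java (the helper's name and shape are my own, not from the article):

```java
import java.util.function.Supplier;

// Hypothetical helper: repeatedly evaluates a supplier until it returns a
// non-null value or the overall timeout elapses, sleeping between attempts.
class Polling {

    static <T> T pollUntil(Supplier<T> source, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T value = source.get();
            if (value != null) {
                return value;
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null; // treat interruption as "no decision"; the caller rolls back
            }
        }
        return null;
    }
}
```

With such a helper, the listener's loop would reduce to a single call like `Polling.pollUntil(() -> eventBus.receiveTransaction(event.getTransactionId()), 10_000, 100)`, with a null result still mapping to a rollback.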

Here's the implementation of the bean responsible for receiving asynchronous events from the message broker. As you can see, after receiving such an event it uses EventBus to forward it to the other beans.

@Component
class DistributedTransactionEventListener(val eventBus: EventBus) {

    @RabbitListener(bindings = [
        QueueBinding(exchange = Exchange(type = ExchangeTypes.TOPIC, name = "trx-events"),
                value = Queue("trx-events-account"))
    ])
    fun onMessage(transaction: DistributedTransaction) {
        eventBus.sendTransaction(transaction)
    }

}

Integration with database

Of course our application uses Postgres as a backend store, so we need to provide the integration. In fact, this is the simplest step of our implementation. First we need to add the following two dependencies. We will use Spring Data JPA for the integration with Postgres.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Our entity is very simple. Besides the id field it contains two fields: customerId and balance.


@Entity
data class Account(@Id @GeneratedValue(strategy = GenerationType.AUTO) val id: Int,
                   val customerId: Int,
                   var balance: Int)

We are using the well-known Spring Data repository pattern.

interface AccountRepository: CrudRepository<Account, Int> {

    fun findByCustomerId(id: Int): List<Account>

}

Here’s the suggested list of configuration settings.

spring:
  application:
    name: account-service
  datasource:
    url: jdbc:postgresql://postgresql:5432/trx
    username: trx
    password: trx
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: create
    show-sql: true
    properties:
      hibernate:
        format_sql: true
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Building order-service

Ok, we have already finished the implementation of transaction-server and two microservices: account-service and product-service. Since the implementation of product-service is very similar to account-service, I have explained everything using the example of account-service. Now we may proceed to the last part – the implementation of order-service. It is responsible for starting a new transaction and marking it as finished. It may also finish it with a rollback. Of course, rollback events may be sent by the other two applications as well.
The implementation of the @RestController class is shown below. I'll describe it step by step. We start a new distributed transaction by calling the POST /transactions endpoint exposed by transaction-server (1). Then we store a new order in the database (2). When we call a transactional method from a downstream service we need to set the HTTP header X-Transaction-ID. The first transactional method called here is PUT /products/{id}/count/{count} (3). It updates the number of products in the store, and we calculate the final price from its response (4). In the next step we call another transactional method – this time from account-service – responsible for withdrawing money from the customer's account (5). Then we enable Spring transaction event processing by publishing an event (6). In the last step we generate a random number and, based on its value, the application throws an exception to roll back the transaction (7).

@RestController
@RequestMapping("/orders")
class OrderController(val repository: OrderRepository,
                      val restTemplate: RestTemplate,
                      val applicationEventPublisher: ApplicationEventPublisher) {

    @PostMapping
    @Transactional
    @Throws(OrderProcessingException::class)
    fun addAndRollback(@RequestBody order: Order) {
        val transaction = restTemplate.postForObject("http://transaction-server/transactions",
                DistributedTransaction(), DistributedTransaction::class.java) // (1)
        val orderSaved = repository.save(order) // (2)
        val product = updateProduct(transaction!!.id!!, order) // (3)
        val totalPrice = product.price * product.count // (4)
        val accounts = restTemplate.getForObject("http://account-service/accounts/customer/{customerId}",
                Array<Account>::class.java, order.customerId)
        val account  = accounts!!.first { it.balance >= totalPrice}
        updateAccount(transaction.id!!, account.id, totalPrice) // (5)
        applicationEventPublisher.publishEvent(OrderTransactionEvent(transaction.id!!)) // (6)
        val r = Random.nextInt(100) // (7)
        if (r % 2 == 0)
            throw OrderProcessingException()
    }

    fun updateProduct(transactionId: String, order: Order): Product {
        val headers = HttpHeaders()
        headers.set("X-Transaction-ID", transactionId)
        val entity: HttpEntity<*> = HttpEntity<Any?>(headers)
        val product = restTemplate.exchange("http://product-service/products/{id}/count/{count}",
                HttpMethod.PUT, entity, Product::class.java, order.id, order.count)
        return product.body!!
    }

    fun updateAccount(transactionId: String, accountId: Int, totalPrice: Int): Account {
        val headers = HttpHeaders()
        headers.set("X-Transaction-ID", transactionId)
        val entity: HttpEntity<*> = HttpEntity<Any?>(headers)
        val account = restTemplate.exchange("http://account-service/accounts/{id}/withdrawal/{amount}",
                HttpMethod.PUT, entity, Account::class.java, accountId, totalPrice)
        return account.body!!
    }
}

Conclusion

Even a trivial implementation of distributed transactions in microservices, like the one demonstrated in this article, is complicated. As you can see, we need to add a new element to our architecture – transaction-server – responsible only for distributed transaction management. We also have to add a message broker in order to exchange events between our applications and transaction-server. However, many of you have asked me about distributed transactions in the microservices world, so I decided to build this simple demo. I'm waiting for your feedback and opinions.

The post Distributed Transactions in Microservices with Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/feed/ 19 8144
JPA Data Access with Micronaut Data https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/ https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/#respond Thu, 25 Jul 2019 11:58:39 +0000 https://piotrminkowski.wordpress.com/?p=7196 When I have been writing some articles comparing Spring and Micronaut frameworks recently, I have taken note of many comments about the lack of built-in ORM and data repositories supported by Micronaut. Spring provides this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close […]

The post JPA Data Access with Micronaut Data appeared first on Piotr's TechBlog.

]]>
When writing some articles comparing the Spring and Micronaut frameworks recently, I took note of many comments about the lack of a built-in ORM and data repositories in Micronaut. Spring has provided this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close to completing work on the first version of a project with ORM support. The project, called Micronaut Data (formerly Micronaut Predator, short for Precomputed Data Repositories), is still under active development, and currently only a snapshot version is available. However, the authors introduce it as more efficient, with reduced memory consumption, compared to competing solutions like Spring Data or Grails GORM. In short, this is achieved thanks to Ahead of Time (AoT) compilation, which pre-computes queries for repository interfaces that are then executed by a thin, lightweight runtime layer, avoiding reflection and runtime proxies.

Currently, Micronaut Data provides runtime support for JPA (Hibernate) and SQL (JDBC). Other implementations are planned for the future. In this article I'm going to show you how to include Micronaut Data in your application and use its main features to provide JPA data access.

1. Dependencies

The snapshot dependency of Micronaut Data is available at https://oss.sonatype.org/content/repositories/snapshots/, so first we need to add it to the repository list in our pom.xml, together with jcenter:

<repositories>
   <repository>
      <id>jcenter.bintray.com</id>
      <url>https://jcenter.bintray.com</url>
   </repository>
   <repository>
      <id>sonatype-snapshots</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
   </repository>
</repositories>

In addition to the standard libraries included for building a web application with Micronaut, we have to add the following dependencies: a database driver (we will use PostgreSQL as the database for our sample application), the Tomcat JDBC connection pool, and micronaut-predator-hibernate-jpa.

<dependency>
   <groupId>io.micronaut.data</groupId>
   <artifactId>micronaut-predator-hibernate-jpa</artifactId>
   <version>${predator.version}</version>
   <scope>compile</scope>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-jdbc-tomcat</artifactId>
   <scope>runtime</scope>
</dependency>    
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.6</version>
</dependency> 

Some Micronaut libraries, including micronaut-predator-processor, have to be added to the annotation processor path. Such a configuration should be provided inside the Maven Compiler Plugin configuration:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.7.0</version>
   <configuration>
      <source>${jdk.version}</source>
      <target>${jdk.version}</target>
      <encoding>UTF-8</encoding>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.data</groupId>
            <artifactId>micronaut-predator-processor</artifactId>
            <version>${predator.version}</version>
         </path>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-validation</artifactId>
            <version>${micronaut.version}</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
</plugin>

The newest RC version of Micronaut at the time of writing is 1.2.0.RC2:

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.micronaut</groupId>
         <artifactId>micronaut-bom</artifactId>
         <version>1.2.0.RC2</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

2. Domain Model

Our database model consists of four tables, as shown below. The same database model has been used in some of my previous examples, including those for Spring Data usage. We have an employee table. Each employee is assigned to exactly one department and one organization, and each department is assigned to exactly one organization. There is also an employment table, which provides the employment history for every single employee.

micronaut-data-jpa-1

Here is the implementation of entity classes corresponding to the database model. Let’s start from Employee class:

@Entity
public class Employee {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employee_id_seq")
   @SequenceGenerator(name = "employee_id_seq", sequenceName = "employee_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   private int age;
   private String position;
   private int salary;
   @ManyToOne
   private Organization organization;
   @ManyToOne
   private Department department;
   @OneToMany
   private Set<Employment> employments;
   
   // ... GETTERS AND SETTERS
}

Here’s the implementation of Department class:


@Entity
public class Department {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq")
   @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   @OneToMany
   private Set<Employee> employees;
   @ManyToOne
   private Organization organization;
   
   // ... GETTERS AND SETTERS
}

And here’s Organization entity:


@Entity
public class Organization {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "organization_id_seq")
    @SequenceGenerator(name = "organization_id_seq", sequenceName = "organization_id_seq", allocationSize = 1)
    private Long id;
    private String name;
    private String address;
    @OneToMany
    private Set<Department> departments;
    @OneToMany
    private Set<Employee> employees;
   
   // ... GETTERS AND SETTERS
}

And the last entity Employment:

@Entity
public class Employment {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employment_id_seq")
    @SequenceGenerator(name = "employment_id_seq", sequenceName = "employment_id_seq", allocationSize = 1)
    private Long id;
    @ManyToOne
    private Employee employee;
    @ManyToOne
    private Organization organization;
    @Temporal(TemporalType.DATE)
    private Date start;
    @Temporal(TemporalType.DATE)
    private Date end;
   
   // ... GETTERS AND SETTERS
}

3. Creating JPA repositories with Micronaut Data

If you are familiar with the Spring Data repositories pattern, you won't have any problems using Micronaut repositories. The approach to declaring repositories and building queries is the same as in Spring Data. You need to declare an interface (or an abstract class) annotated with @Repository that extends the CrudRepository interface. CrudRepository is not the only interface that can be extended. You can also use GenericRepository, AsyncCrudRepository for asynchronous operations, ReactiveStreamsCrudRepository for reactive CRUD execution, or PageableRepository, which adds methods for pagination. A typical repository declaration looks as shown below.

@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    Set<EmployeeDTO> findBySalaryGreaterThan(int salary);

    Set<EmployeeDTO> findByOrganization(Organization organization);

    int findAvgSalaryByAge(int age);

    int findAvgSalaryByOrganization(Organization organization);

}

I have declared some additional find methods there. The most common query prefix is find, but you can also use search, query, get, read, or retrieve. The first two queries return all employees with a salary greater than a given value and all employees assigned to a given organization. The Employee entity is in a many-to-one relation with Organization, so we may also use relational fields as query parameters. It is noteworthy that both queries return collections of DTO objects. That's possible because Micronaut Data supports reflection-free Data Transfer Object (DTO) projections if the return type is annotated with @Introspected. Here's the declaration of EmployeeDTO.

@Introspected
public class EmployeeDTO {

    private String name;
    private int age;
    private String position;
    private int salary;
   
    // ... GETTERS AND SETTERS
}
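These finder methods rely on the derived-query convention shared with Spring Data: the part of the method name after the prefix is parsed into property paths and operators at build time. As a rough, purely illustrative sketch of that naming convention (this is not Micronaut's actual parser):

```java
// Illustrative only: maps a few derived-query method names to the
// JPQL-style predicates they conventionally stand for.
class DerivedQueryExample {

    static String predicateFor(String methodName) {
        // strip a recognized prefix such as "findBy", "getBy", "queryBy", ...
        String criteria = methodName.replaceFirst("^(find|search|query|get|read|retrieve)By", "");
        if (criteria.endsWith("GreaterThan")) {
            return lower(criteria.replace("GreaterThan", "")) + " > :value";
        }
        if (criteria.endsWith("IsNull")) {
            return lower(criteria.replace("IsNull", "")) + " IS NULL";
        }
        // default operator is equality
        return lower(criteria) + " = :value";
    }

    private static String lower(String property) {
        return Character.toLowerCase(property.charAt(0)) + property.substring(1);
    }
}
```

A real parser also handles conjunctions such as And/Or, ordering clauses, and nested property paths, which this sketch ignores.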

The EmployeeRepository contains two other methods using aggregation expressions. The method findAvgSalaryByAge computes the average salary for a given age of employees, while findAvgSalaryByOrganization computes the average salary for a given organization.
For comparison, let's take a look at another repository implementation, EmploymentRepository. We need two additional find methods. The first, findByEmployeeOrderByStartDesc, searches the employment history for a given employee, ordered by start date descending. The second finds the employment without an end date set, which in fact means the employment for the current job.

@Repository
public interface EmploymentRepository extends CrudRepository<Employment, Long> {

    Set<EmploymentDTO> findByEmployeeOrderByStartDesc(Employee employee);

    Employment findByEmployeeAndEndIsNull(Employee employee);

}

Micronaut Data is able to manage transactions automatically. You just need to annotate your method with @Transactional. In the source code fragment below you can see the method used when an employee changes jobs. We perform a bunch of save operations inside that method. First we change the target department and organization for the given employee, then we create a new employment history record for the new job, and finally we set the end date on the previous employment entity (found using the repository method findByEmployeeAndEndIsNull).

@Inject
DepartmentRepository departmentRepository;
@Inject
EmployeeRepository employeeRepository;
@Inject
EmploymentRepository employmentRepository;

@Transactional
public void changeJob(Long employeeId, Long targetDepartmentId) {
   Optional<Employee> employee = employeeRepository.findById(employeeId);
   employee.ifPresent(employee1 -> {
      Optional<Department> department = departmentRepository.findById(targetDepartmentId);
      department.ifPresent(department1 -> {
         employee1.setDepartment(department1);
         employee1.setOrganization(department1.getOrganization());
         Employment employment = new Employment(employee1, department1.getOrganization(), new Date());
         employmentRepository.save(employment);
         Employment previousEmployment = employmentRepository.findByEmployeeAndEndIsNull(employee1);
         previousEmployment.setEnd(new Date());
         employmentRepository.save(previousEmployment);
      });
   });
}

Ok, now let's move on to the last repository implementation discussed in this section – OrganizationRepository. Since the Organization entity is in a lazily loaded one-to-many relation with Employee and Department, we need to fetch the associated data in order to present the dependencies in the output. To achieve that we can use the @Join annotation on the repository interface, specifying a JOIN FETCH. Since @Join is a repeatable annotation, it can be specified multiple times for different associations, as shown below.

@Repository
public interface OrganizationRepository extends CrudRepository<Organization, Long> {

    @Join(value = "departments", type = Join.Type.LEFT_FETCH)
    @Join(value = "employees", type = Join.Type.LEFT_FETCH)
    Optional<Organization> findByName(String name);

}

4. Batch operations

Micronaut Data repositories support batch operations. This can sometimes be useful, for example in automated tests. Here's my simple JUnit test that adds multiple employees to a single department inside an organization:

@Test
public void addMultiple() {
   List<Employee> employees = Arrays.asList(
      new Employee("Test1", 20, "Developer", 5000),
      new Employee("Test2", 30, "Analyst", 15000),
      new Employee("Test3", 40, "Manager", 25000),
      new Employee("Test4", 25, "Developer", 9000),
      new Employee("Test5", 23, "Analyst", 8000),
      new Employee("Test6", 50, "Developer", 12000),
      new Employee("Test7", 55, "Architect", 25000),
      new Employee("Test8", 43, "Manager", 15000)
   );

   Organization organization = new Organization("TestWithEmployees", "TestAddress");
   Organization organizationSaved = organizationRepository.save(organization);
   Assertions.assertNotNull(organization.getId());
   Department department = new Department("TestWithEmployees");
   department.setOrganization(organization);
   Department departmentSaved = departmentRepository.save(department);
   Assertions.assertNotNull(department.getId());
   employeeRepository.saveAll(employees.stream().map(employee -> {
      employee.setOrganization(organizationSaved);
      employee.setDepartment(departmentSaved);
      return employee;
   }).collect(Collectors.toList()));
}

5. Controllers

Finally, the last implementation step – building REST controllers. OrganizationController is pretty simple. It injects OrganizationRepository and uses it for saving entities and searching for them by name. Here's the implementation:

@Controller("organizations")
public class OrganizationController {

    @Inject
    OrganizationRepository repository;

    @Post("/organization")
    public Long addOrganization(@Body Organization organization) {
        Organization organization1 = repository.save(organization);
        return organization1.getId();
    }

    @Get("/organization/name/{name}")
    public Optional<Organization> findOrganization(@NotNull String name) {
        return repository.findByName(name);
    }

}

EmployeeController is a little bit more complicated. It exposes the four additional find methods defined in EmployeeRepository. There are also a method for adding a new employee and assigning them to a department, and a method for changing jobs, both implemented inside the SampleService bean.

@Controller("employees")
public class EmployeeController {

    @Inject
    EmployeeRepository repository;
    @Inject
    OrganizationRepository organizationRepository;
    @Inject
    SampleService service;

    @Get("/salary/{salary}")
    public Set<EmployeeDTO> findEmployeesBySalary(int salary) {
        return repository.findBySalaryGreaterThan(salary);
    }

    @Get("/organization/{organizationId}")
    public Set<EmployeeDTO> findEmployeesByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findByOrganization(organization.get());
    }

    @Get("/salary-avg/age/{age}")
    public int findAvgSalaryByAge(int age) {
        return repository.findAvgSalaryByAge(age);
    }

    @Get("/salary-avg/organization/{organizationId}")
    public int findAvgSalaryByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findAvgSalaryByOrganization(organization.get());
    }

    @Post("/{departmentId}")
    public void addNewEmployee(@Body Employee employee, Long departmentId) {
        service.hireEmployee(employee, departmentId);
    }

    @Put("/change-job")
    public void changeJob(@Body ChangeJobRequest request) {
        service.changeJob(request.getEmployeeId(), request.getTargetDepartmentId());
    }

}

6. Configuring database connection

As usual, we use a Docker image to run the database instance locally. Here's the command that runs a container with Postgres and exposes it on port 5432:

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=predator -e POSTGRES_PASSWORD=predator123 -e POSTGRES_DB=predator postgres

After startup, my Postgres instance is available at the virtual address 192.168.99.100, so I have to set it in the Micronaut application.yml. Besides the database connection settings, we also set some JPA properties that enable SQL logging and automatically apply model changes to the database schema. Here's the full configuration of our sample application inside application.yml:

micronaut:
  application:
    name: sample-micronaut-jpa

jackson:
  bean-introspection-module: true

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/predator?ssl=false
    driverClassName: org.postgresql.Driver
    username: predator
    password: predator123

jpa:
  default:
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Conclusion

The support for ORM was one of the most anticipated features of the Micronaut framework. Not only will it be available in a release version soon, but it is also almost 1.5x faster than Spring Data JPA – according to the article https://objectcomputing.com/news/2019/07/18/unleashing-predator-precomputed-data-repositories by Micronaut project lead Graeme Rocher. In my opinion, the ORM support provided by Micronaut Data may be a reason for developers to choose Micronaut instead of Spring Boot.
In this article I have demonstrated the most interesting features of Micronaut Data JPA. I think it will be continuously improved, and we will see some new useful features soon. The sample application source code is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-jpa.git. Before starting with Micronaut Data it is worth reading about the basics: Micronaut Tutorial: Beans and scopes.

The post JPA Data Access with Micronaut Data appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/feed/ 0 7196
Kotlin Microservices with Micronaut, Spring Cloud and JPA https://piotrminkowski.com/2019/02/19/kotlin-microservices-with-micronaut-spring-cloud-and-jpa/ https://piotrminkowski.com/2019/02/19/kotlin-microservices-with-micronaut-spring-cloud-and-jpa/#respond Tue, 19 Feb 2019 16:10:53 +0000 https://piotrminkowski.wordpress.com/?p=7010 Micronaut Framework provides support for Kotlin built upon Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery, and client-side load balancing. These features allow you to include your application built on top of Micronaut into the existing Kotlin microservices-based system. The most popular example of such an approach […]

The post Kotlin Microservices with Micronaut, Spring Cloud and JPA appeared first on Piotr's TechBlog.

]]>
Micronaut Framework provides support for Kotlin built upon Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery, and client-side load balancing. These features allow you to include your application built on top of Micronaut into the existing Kotlin microservices-based system.

The most popular example of such an approach may be integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools from the Spring Cloud project. That's good news, because in version 1.0 the only supported discovery solution was Consul, and there was no possibility to use Eureka discovery together with the Consul property source (running them together ended with an exception).

In this article you will learn how to:

  • Configure Micronaut Maven support for Kotlin using Kapt compiler
  • Implement microservices with Micronaut and Kotlin
  • Integrate Micronaut with Spring Cloud Eureka discovery server
  • Integrate Micronaut with Spring Cloud Config server
  • Configure JPA/Hibernate support for application built on top Micronaut
  • Run a single instance of PostgreSQL shared between all sample microservices (for simplicity)

Our architecture is pretty similar to the one described in my previous article about Micronaut, Quick Guide to Microservices with Micronaut Framework. We also have three microservices that communicate with each other. However, we use Spring Cloud Eureka and Spring Cloud Config for discovery and distributed configuration instead of Consul. Every service has a backend store – a PostgreSQL database. This architecture is visualized in the following picture.

kotlin-micronaut-microservices-arch.png

After that short introduction we may proceed to the development. Let’s begin from configuring Kotlin support for Micronaut.

1. Kotlin with Micronaut microservices – configuration

Support for Kotlin with the Kapt compiler plugin is described well on the Micronaut docs site (https://docs.micronaut.io/1.1.0.M1/guide/index.html#kotlin). However, I decided to use Maven instead of Gradle, so our configuration will be slightly different from the Gradle instructions. We configure Kapt inside the Maven plugin for Kotlin, kotlin-maven-plugin. Thanks to that, Kapt will create Java "stub" classes for each of your Kotlin classes, which can then be processed by Micronaut's Java annotation processor. The Micronaut annotation processors are declared inside the annotationProcessorPaths tag in the configuration section. Here's the full Maven configuration providing support for Kotlin. Besides the core library micronaut-inject-java, we also use annotations from the tracing, openapi and JPA libraries.

<plugin>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-maven-plugin</artifactId>
   <dependencies>
      <dependency>
         <groupId>org.jetbrains.kotlin</groupId>
         <artifactId>kotlin-maven-allopen</artifactId>
         <version>${kotlin.version}</version>
      </dependency>
   </dependencies>
   <configuration>
      <jvmTarget>1.8</jvmTarget>
   </configuration>
   <executions>
      <execution>
         <id>compile</id>
         <phase>compile</phase>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
      <execution>
         <id>test-compile</id>
         <phase>test-compile</phase>
         <goals>
            <goal>test-compile</goal>
         </goals>
      </execution>
      <execution>
         <id>kapt</id>
         <goals>
            <goal>kapt</goal>
         </goals>
         <configuration>
            <sourceDirs>
               <sourceDir>src/main/kotlin</sourceDir>
            </sourceDirs>
            <annotationProcessorPaths>
               <annotationProcessorPath>
                  <groupId>io.micronaut</groupId>
                  <artifactId>micronaut-inject-java</artifactId>
                  <version>${micronaut.version}</version>
               </annotationProcessorPath>
               <annotationProcessorPath>
                  <groupId>io.micronaut.configuration</groupId>
                  <artifactId>micronaut-openapi</artifactId>
                  <version>${micronaut.version}</version>
               </annotationProcessorPath>
               <annotationProcessorPath>
                  <groupId>io.micronaut</groupId>
                  <artifactId>micronaut-tracing</artifactId>
                  <version>${micronaut.version}</version>
               </annotationProcessorPath>
               <annotationProcessorPath>
                  <groupId>javax.persistence</groupId>
                  <artifactId>javax.persistence-api</artifactId>
                  <version>2.2</version>
               </annotationProcessorPath>
               <annotationProcessorPath>
                  <groupId>io.micronaut.configuration</groupId>
                  <artifactId>micronaut-hibernate-jpa</artifactId>
                  <version>1.1.0.RC2</version>
               </annotationProcessorPath>
            </annotationProcessorPaths>
         </configuration>
      </execution>
   </executions>
</plugin>

We also should not run maven-compiler-plugin during the compilation phase. Kapt already generates the Java classes, so we don’t need to run any other compiler during the build.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <configuration>
      <proc>none</proc>
      <source>1.8</source>
      <target>1.8</target>
   </configuration>
   <executions>
      <execution>
         <id>default-compile</id>
         <phase>none</phase>
      </execution>
      <execution>
         <id>default-testCompile</id>
         <phase>none</phase>
      </execution>
      <execution>
         <id>java-compile</id>
         <phase>compile</phase>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
      <execution>
         <id>java-test-compile</id>
         <phase>test-compile</phase>
         <goals>
            <goal>testCompile</goal>
         </goals>
      </execution>
   </executions>
</plugin>

Finally, we add the Kotlin standard library and the Jackson module for JSON serialization.


<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
   <version>${kotlin.version}</version>
</dependency>

If you run the application from IntelliJ, you should first enable annotation processing. To do that, go to Build, Execution, Deployment -> Compiler -> Annotation Processors.


2. Running Postgres

Before proceeding to development, we have to start an instance of the PostgreSQL database. It will run as a Docker container. In my case PostgreSQL is available at 192.168.99.100:5432, because I’m using Docker Toolbox.

$ docker run -d --name postgres -e POSTGRES_USER=micronaut -e POSTGRES_PASSWORD=123456 -e POSTGRES_DB=micronaut -p 5432:5432 postgres
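If you want a quick sanity check that the container is up and accepting connections, you can open a psql session inside it. This is just a convenience step, not part of the original setup; the container name, user and database below match the docker run command above:

```shell
# Open psql inside the running container and print connection info
docker exec -it postgres psql -U micronaut -d micronaut -c '\conninfo'
```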

3. Enabling Hibernate for Micronaut

Hibernate configuration is a little harder in Micronaut than in Spring Boot. There is no project like Spring Data JPA, where almost everything is auto-configured. Besides the JDBC driver for the target database, we have to include the following dependencies. We may choose between three libraries providing the datasource implementation: Tomcat, Hikari or DBCP.

<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <version>42.2.5</version>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-jdbc-hikari</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-hibernate-jpa</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

The next step is to provide some configuration settings. All the properties will be stored on the configuration server. We have to set the database connection settings and credentials. The JPA settings are provided under the jpa.* key. We force Hibernate to update the database schema on application startup and to print all SQL statements (for testing only).

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Here’s our sample domain object.

@Entity
data class Department(
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq")
        @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq")
        var id: Long,
        var organizationId: Long,
        var name: String) {

    @Transient
    var employees: MutableList<Employee> = mutableListOf()

}

The repository bean injects an EntityManager using the @PersistenceContext and @CurrentSession annotations. All functions need to be annotated with @Transactional, which requires the methods not to be final (the open modifier in Kotlin).

@Singleton
open class DepartmentRepository(@param:CurrentSession @field:PersistenceContext val entityManager: EntityManager) {

    @Transactional
    open fun add(department: Department): Department {
        entityManager.persist(department)
        return department
    }

    @Transactional(readOnly = true)
    open fun findById(id: Long): Department = entityManager.find(Department::class.java, id)

    @Transactional(readOnly = true)
    open fun findAll(): List<Department> = entityManager.createQuery("SELECT d FROM Department d").resultList as List<Department>

    @Transactional(readOnly = true)
    open fun findByOrganization(organizationId: Long) = entityManager.createQuery("SELECT d FROM Department d WHERE d.organizationId = :orgId")
            .setParameter("orgId", organizationId)
            .resultList as List<Department>

}
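As a side note, the unchecked casts on resultList can be avoided with JPA’s typed queries. Here’s a minimal sketch of the same two finders rewritten that way (my variant, not code from the sample repository):

```kotlin
@Transactional(readOnly = true)
open fun findAllTyped(): List<Department> =
        // Passing the result class returns a TypedQuery<Department>, so no cast is needed
        entityManager.createQuery("SELECT d FROM Department d", Department::class.java)
                .resultList

@Transactional(readOnly = true)
open fun findByOrganizationTyped(organizationId: Long): List<Department> =
        entityManager.createQuery(
                "SELECT d FROM Department d WHERE d.organizationId = :orgId",
                Department::class.java)
                .setParameter("orgId", organizationId)
                .resultList
```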

4. Running Spring Cloud Config Server

Running a Spring Cloud Config server is very simple. I have already described it in some of my previous articles, always for Java; this time we start it as a Kotlin application. Here’s our main class, annotated with @EnableConfigServer.

@SpringBootApplication
@EnableConfigServer
class ConfigApplication

fun main(args: Array<String>) {
    runApplication<ConfigApplication>(*args)
}

Besides the Kotlin core dependency, we need to include the spring-cloud-config-server artifact.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
   <version>${kotlin.version}</version>
</dependency>

By default, the config server uses Git as the properties source backend. We prefer classpath resources, which is much simpler for our tests. To do that, we have to enable the native profile. We also set the server port to 8888.

spring:
  application:
    name: config-service
  profiles:
    active: native
server:
  port: 8888

If you place all the configuration files under the directory /src/main/resources/config, they will be loaded automatically at startup.


Here’s a configuration file for department-service.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/micronaut?ssl=false
    username: micronaut
    password: 123456
    driverClassName: org.postgresql.Driver
jpa:
  default:
    packages-to-scan:
      - 'pl.piomin.services.department.model'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
endpoints:
  info:
    enabled: true
    sensitive: false
eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"
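You can verify that the config server actually serves this file through its plain HTTP endpoints. Assuming the server runs on port 8888 and the file is named department-service.yml, Spring Cloud Config exposes it under the {application}/{profile} path:

```shell
# Fetch the properties resolved for department-service in the default profile
curl http://localhost:8888/department-service/default
```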

5. Running Eureka Server

Eureka server will also be run as a Spring Boot application written in Kotlin.

@SpringBootApplication
@EnableEurekaServer
class DiscoveryApplication

fun main(args: Array<String>) {
    runApplication<DiscoveryApplication>(*args)
}

We also need to include a single dependency, spring-cloud-starter-netflix-eureka-server, besides kotlin-stdlib-jdk8.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
   <version>${kotlin.version}</version>
</dependency>

We run a standalone instance of Eureka on port 8761.

spring:
  application:
    name: discovery-service
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

6. Integrating Micronaut microservices with Spring Cloud

The implementation of a distributed configuration client is included in the Micronaut core. We only need to include a module for service discovery.

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-discovery-client</artifactId>
</dependency>

We don’t have to change anything in the source code. All the features can be enabled via configuration settings. First, we enable the config client by setting the property micronaut.config-client.enabled to true. The next step is to enable the specific implementation of the config client (in this case Spring Cloud Config) and set the target URL.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
spring:
  cloud:
    config:
      enabled: true
      uri: http://localhost:8888/

Each application then fetches its properties from the configuration server. The part of the configuration responsible for enabling discovery via the Eureka server is visible below.

eureka:
  client:
    registration:
      enabled: true
    defaultZone: "localhost:8761"

7. Running Micronaut Kotlin applications

Kapt needs to be able to compile the Kotlin code to Java successfully. That’s why we place the main method inside a class declaration and annotate it with @JvmStatic. The main class visible below is also annotated with @OpenAPIDefinition in order to generate the Swagger definition for the API methods.

@OpenAPIDefinition(
        info = Info(
                title = "Departments Management",
                version = "1.0",
                description = "Department API",
                contact = Contact(url = "https://piotrminkowski.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
        )
)
open class DepartmentApplication {

    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            Micronaut.run(DepartmentApplication::class.java)
        }
    }
   
}

Here’s the controller class from department-service. It injects a repository bean for database integration and EmployeeClient for HTTP communication with employee-service.

@Controller("/departments")
open class DepartmentController(private val logger: Logger = LoggerFactory.getLogger(DepartmentController::class.java)) {

    @Inject
    lateinit var repository: DepartmentRepository
    @Inject
    lateinit var employeeClient: EmployeeClient

    @Post
    fun add(@Body department: Department): Department {
        logger.info("Department add: {}", department)
        return repository.add(department)
    }

    @Get("/{id}")
    fun findById(id: Long): Department? {
        logger.info("Department find: id={}", id)
        return repository.findById(id)
    }

    @Get
    fun findAll(): List<Department> {
        logger.info("Department find")
        return repository.findAll()
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    open fun findByOrganization(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        return repository.findByOrganization(organizationId)
    }

    @Get("/organization/{organizationId}/with-employees")
    @ContinueSpan
    open fun findByOrganizationWithEmployees(@SpanTag("organizationId") organizationId: Long): List<Department> {
        logger.info("Department find: organizationId={}", organizationId)
        val departments = repository.findByOrganization(organizationId)
        departments.forEach { it.employees = employeeClient.findByDepartment(it.id) }
        return departments
    }

}
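Once department-service is running, the controller can be exercised with plain HTTP calls. A sample session is sketched below; the port is assigned randomly because of server.port: -1, so substitute the actual port reported at startup, and the payload fields match the Department entity:

```shell
# Create a department (replace <port> with the port printed at startup;
# the id is generated from the database sequence)
curl -X POST http://localhost:<port>/departments \
     -H 'Content-Type: application/json' \
     -d '{"organizationId":1,"name":"Test"}'

# Fetch all departments for organization 1
curl http://localhost:<port>/departments/organization/1
```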

It is worth taking a look at the HTTP client implementation. It was discussed in detail in my previous article about Micronaut, Quick Guide to Microservice with Micronaut Framework.

@Client(id = "employee-service", path = "/employees")
interface EmployeeClient {

   @Get("/department/{departmentId}")
   fun findByDepartment(departmentId: Long): MutableList<Employee>
   
}
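Since the call to employee-service goes over the network, it can fail. Micronaut’s declarative clients can be paired with a fallback bean from its retry module; this is my addition, not part of the article’s sample code, so treat it as a sketch:

```kotlin
// A bean annotated with @Fallback is picked up by Micronaut when the
// declarative HTTP client call fails
@Fallback
class EmployeeClientFallback : EmployeeClient {

    // Degrade gracefully: return an empty list instead of propagating the error
    override fun findByDepartment(departmentId: Long): MutableList<Employee> = mutableListOf()

}
```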

You can run all the microservices from IntelliJ. You may also build the whole project with Maven using the mvn clean install command, and then run each application with java -jar. Thanks to maven-shade-plugin, the applications are packaged as uber jars. Run them in the following order: config-service, discovery-service, and then the microservices.


$ java -jar config-service/target/config-service-1.0-SNAPSHOT.jar
$ java -jar discovery-service/target/discovery-service-1.0-SNAPSHOT.jar
$ java -jar employee-service/target/employee-service-1.0-SNAPSHOT.jar
$ java -jar department-service/target/department-service-1.0-SNAPSHOT.jar
$ java -jar organization-service/target/organization-service-1.0-SNAPSHOT.jar

Afterwards, you can take a look at the Eureka dashboard available at http://localhost:8761 to see the list of running services. You may also perform some tests by calling the HTTP API methods.


Summary

The sample applications’ source code is available on GitHub in the repository sample-micronaut-microservices, in the branch kotlin. You can refer to that repository for implementation details that have not been included in the article.

The post Kotlin Microservices with Micronaut, Spring Cloud and JPA appeared first on Piotr's TechBlog.
