Hibernate Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/hibernate/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Express JPA Queries as Java Streams
https://piotrminkowski.com/2021/07/13/express-jpa-queries-as-java-streams/
Tue, 13 Jul 2021 11:00:11 +0000

In this article, you will learn how to use the JPAstreamer library to express your JPA queries with Java streams. I will also show you how to integrate this library with Spring Boot and Spring Data. The idea behind it is simple yet brilliant: the library creates a SQL query based on your Java stream. That's all. I have already mentioned this library on my Twitter account.

(Image: jpa-java-streams-twitter)

Before we start, let’s take a look at the following picture. It should explain the concept in a simple way. That’s pretty intuitive, right?

(Image: jpa-java-streams-table)
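To make the concept concrete, here is a plain java.util.stream sketch (hypothetical data, no database involved) with comments indicating the SQL clause each operation would roughly map to when JPAstreamer translates the pipeline:

```java
import java.util.List;
import java.util.stream.Collectors;

// Plain in-memory stand-in illustrating the mapping JPAstreamer performs.
// The Employee record and sample data below are hypothetical, not from the post.
public class StreamToSqlDemo {
    record Employee(String name, int salary) {}

    public static List<String> query(List<Employee> table, int minSalary) {
        return table.stream()
                .filter(e -> e.salary() > minSalary)       // -> WHERE salary > ?
                .sorted((a, b) -> a.salary() - b.salary()) // -> ORDER BY salary
                .map(Employee::name)                       // -> SELECT name
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> table = List.of(
                new Employee("A", 10000), new Employee("B", 30000), new Employee("C", 20000));
        System.out.println(query(table, 15000)); // [C, B]
    }
}
```

The difference with JPAstreamer is that this translation happens on the database side rather than in memory.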

Source Code

If you would like to try it yourself, take a look at my source code: clone my GitHub repository and follow my instructions. Let's begin.

Dependencies and configuration

As an example, we have a simple Spring Boot application that runs an embedded H2 database and exposes data through a REST API. It also uses Spring Data JPA to interact with the database, but with the JPAstreamer library this is completely transparent for us. So, in the first step, we need to include the following two dependencies. The first of them adds JPAstreamer, while the second integrates it with Spring Boot.

<dependency>
  <groupId>com.speedment.jpastreamer</groupId>
  <artifactId>jpastreamer-core</artifactId>
  <version>1.0.1</version>
</dependency>
<dependency>
  <groupId>com.speedment.jpastreamer.integration.spring</groupId>
  <artifactId>spring-boot-jpastreamer-autoconfigure</artifactId>
  <version>1.0.1</version>
</dependency>

Then, we need to add Spring Boot Web and JPA starters, H2 database, and optionally Lombok.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <version>1.18.20</version>
</dependency>

I'm using Java 15 for compilation. Because I use Java records for DTOs, and records were still a preview feature in Java 15, I need to enable preview features. Here's the plugin responsible for that.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.8.1</version>
  <configuration>
    <release>15</release>
    <compilerArgs>
      <arg>--enable-preview</arg>
    </compilerArgs>
    <source>15</source>
    <target>15</target>
  </configuration>
</plugin>

The JPAstreamer library generates source code based on your entity model. We may then use it, for example, to perform filtering or sorting, but we will talk about that in the next part of the article. For now, let's configure the build process with build-helper-maven-plugin. The source code is generated in the target/generated-sources/annotations directory. If you use IntelliJ IDEA, that directory is automatically included as a source folder in your project.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.2.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>${project.build.directory}/generated-sources/annotations</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>

JPAstreamer generates a metamodel class for each entity, e.g. Employee$ for the Employee entity presented below.

Entity model for JPA

Let's take a look at our example entities. Here's the Employee class. Each employee is assigned to a department and an organization.

@Entity
@NoArgsConstructor
@Getter
@Setter
@ToString
public class Employee {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   private String position;
   private int salary;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s the Department entity.

@Entity
@NoArgsConstructor
@Getter
@Setter
public class Department {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s the Organization entity.

@Entity
@NoArgsConstructor
@Getter
@Setter
public class Organization {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;
}

We will also use Java records to create DTOs. Here’s a simple DTO for the Employee entity.

public record EmployeeDTO(
   Integer id,
   String name,
   String position,
   int salary
) {
   public EmployeeDTO(Employee emp) {
      this(emp.getId(), emp.getName(), emp.getPosition(), emp.getSalary());
   }
}

We also have a DTO record to express relationship fields.

public record EmployeeWithDetailsDTO(
   Integer id,
   String name,
   String position,
   int salary,
   String organizationName,
   String departmentName
) {
   public EmployeeWithDetailsDTO(Employee emp) {
      this(emp.getId(), emp.getName(), emp.getPosition(), emp.getSalary(),
            emp.getOrganization().getName(),
            emp.getDepartment().getName());
   }
}
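Outside of JPA, the same DTO pattern can be shown with a plain Java sketch. The Employee class below is a hypothetical stand-in for the entity; the record's extra constructor delegates to the canonical one, just like in EmployeeDTO above, and the record gives us accessors, equals/hashCode, and toString for free:

```java
// Simplified, self-contained sketch of the record-as-DTO pattern.
public class DtoDemo {
    // Hypothetical POJO stand-in for the JPA entity.
    static class Employee {
        private final Integer id;
        private final String name;
        Employee(Integer id, String name) { this.id = id; this.name = name; }
        Integer getId() { return id; }
        String getName() { return name; }
    }

    // The extra constructor delegates to the canonical one.
    record EmployeeDTO(Integer id, String name) {
        EmployeeDTO(Employee emp) { this(emp.getId(), emp.getName()); }
    }

    public static void main(String[] args) {
        EmployeeDTO dto = new EmployeeDTO(new Employee(1, "Test1"));
        System.out.println(dto); // EmployeeDTO[id=1, name=Test1]
    }
}
```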

Express JPA queries as Java streams

Let's begin with a simple example. We would like to get all the departments, sort them in ascending order by the name field, and then convert them to DTOs. We just need to get an instance of the JPAStreamer object and invoke its stream() method. Everything else works just like a standard Java stream.

@GetMapping
public List<DepartmentDTO> findAll() {
   return streamer.stream(Department.class)
        .sorted(Department$.name)
        .map(DepartmentDTO::new)
        .collect(Collectors.toList());
}

Now, we can call the endpoint after starting our Spring Boot application.

$ curl http://localhost:8080/departments
[{"id":4,"name":"aaa"},{"id":3,"name":"bbb"},{"id":2,"name":"ccc"},{"id":1,"name":"ddd"}]

Let's take a look at something a little more advanced. We are going to find employees with salaries greater than an input value, sort them by salary, and, of course, map them to DTOs.

@GetMapping("/greater-than/{salary}")
public List<EmployeeDTO> findBySalaryGreaterThan(@PathVariable("salary") int salary) {
   return streamer.stream(Employee.class)
         .filter(Employee$.salary.greaterThan(salary))
         .sorted(Employee$.salary)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}

Then, we call another endpoint.

$ curl http://localhost:8080/employees/greater-than/25000    
[{"id":5,"name":"Test5","position":"Architect","salary":30000},{"id":7,"name":"Test7","position":"Manager","salary":30000},{"id":9,"name":"Test9","position":"Developer","salary":30000}]

We can also perform JPA pagination by using the skip() and limit() Java stream methods.

@GetMapping("/offset/{offset}/limit/{limit}")
public List<EmployeeDTO> findAllWithPagination(
      @PathVariable("offset") int offset, 
      @PathVariable("limit") int limit) {
   return streamer.stream(Employee.class)
         .skip(offset)
         .limit(limit)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}
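The pagination semantics can be illustrated in memory with a plain Java stream. This is only a sketch with hypothetical data: skip(n) corresponds to SQL OFFSET n and limit(m) to LIMIT m.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// In-memory illustration of skip/limit pagination semantics.
public class PaginationDemo {
    public static List<Integer> page(List<Integer> ids, int offset, int limit) {
        return ids.stream()
                .skip(offset)   // -> OFFSET ?
                .limit(limit)   // -> LIMIT ?
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> ids = IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(page(ids, 4, 3)); // [5, 6, 7]
    }
}
```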

Importantly, all such operations are performed on the database side; the skip and limit calls end up as pagination clauses in the SQL query generated for the implementation visible above.

What about relationships between entities? Of course, relationships between tables are handled by the JPA provider. In order to perform a JOIN operation with JPAstreamer, we just need to specify the joining stream. By default it is a LEFT JOIN, but we can customize it when calling the joining() method. In the following fragment of code, we join Department and Organization, which are in a @ManyToOne relationship with the Employee entity.

@GetMapping("/{id}")
public EmployeeWithDetailsDTO findById(@PathVariable("id") Integer id) {
   return streamer.stream(of(Employee.class)
           .joining(Employee$.department)
           .joining(Employee$.organization))
        .filter(Employee$.id.equal(id))
        .map(EmployeeWithDetailsDTO::new)
        .findFirst()
        .orElseThrow();
}

Of course, we can call many other Java stream methods. In the following fragment of code, we count the number of employees assigned to a particular department.

@GetMapping("/{id}/count-employees")
public long getNumberOfEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .mapToLong(Set::size)
         .sum();
}

And the last example today. We get all the employees assigned to a particular department and map each of them to EmployeeDTO.

@GetMapping("/{id}/employees")
public List<EmployeeDTO> getEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .flatMap(Set::stream)
         .map(EmployeeDTO::new)
         .collect(Collectors.toList());
}

Integration with Spring Boot

We can easily integrate JPAstreamer with Spring Boot and Spring Data JPA. In fact, you don't have to do anything more than include the dependency responsible for the Spring integration. It also provides auto-configuration for Spring Data JPA. Therefore, we just need to inject the JPAStreamer bean into the target service or controller.

@RestController
@RequestMapping("/employees")
public class EmployeeController {

   private final JPAStreamer streamer;

   public EmployeeController(JPAStreamer streamer) {
      this.streamer = streamer;
   }

   @GetMapping("/greater-than/{salary}")
   public List<EmployeeDTO> findBySalaryGreaterThan(
         @PathVariable("salary") int salary) {
      return streamer.stream(Employee.class)
            .filter(Employee$.salary.greaterThan(salary))
            .sorted(Employee$.salary)
            .map(EmployeeDTO::new)
            .collect(Collectors.toList());
   }

   // ...

}

Final Thoughts

The concept behind the JPAstreamer library is very interesting. I really like it. The only disadvantage I found is that it sends certain data back to Speedment's servers for Google Analytics. If you wish to disable this feature, you need to contact their team. To be honest, that concerns me a little, but it doesn't change my opinion that JPAstreamer is a very useful library. If you are interested in topics related to Java streams and collections, you may read my article Using Eclipse Collections.

The post Express JPA Queries as Java Streams appeared first on Piotr's TechBlog.

An Advanced GraphQL with Quarkus
https://piotrminkowski.com/2021/04/14/advanced-graphql-with-quarkus/
Wed, 14 Apr 2021 09:46:25 +0000

In this article, you will learn how to create a GraphQL application using the Quarkus framework. Our application will connect to a database, and we will use the Quarkus Panache module as the ORM provider. Quarkus GraphQL support, in turn, is built on top of the SmallRye GraphQL library. We will discuss some more advanced GraphQL and JPA topics like dynamic filtering or relation fetching.

As an example, I will use the same application as in my previous article about Spring Boot GraphQL support. We will migrate it to Quarkus. Instead of the Netflix DGS library, we will use the already mentioned SmallRye GraphQL module. The next important challenge is to replace the ORM layer based on Spring Data with Quarkus Panache. If you would like to know more about GraphQL on Spring Boot read my article An Advanced GraphQL with Spring Boot and Netflix DGS.

Source Code

If you would like to try it yourself, take a look at my source code: clone my GitHub repository, go to the sample-app-graphql directory, and follow my instructions.

We use the same schema and entity model as in my previous article about Spring Boot and GraphQL. Our application exposes a GraphQL API and connects to an H2 in-memory database. There are three entities: Employee, Department, and Organization, each stored in a separate table. Let's take a look at a visualization of the relations between them.

(Image: quarkus-graphql-entities)

1. Dependencies for Quarkus GraphQL

Let's start with dependencies. We need to include SmallRye GraphQL, Quarkus Panache, and the io.quarkus:quarkus-jdbc-h2 artifact for running an in-memory database with our application. In order to generate getters and setters, we can include the Lombok library. However, we can also take advantage of the Quarkus auto-generation support: after extending an entity class with PanacheEntityBase, Quarkus will generate getters and setters for us. We may even extend PanacheEntity to use the default id field.

<dependencies>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-smallrye-graphql</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-jdbc-h2</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm-panache</artifactId>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.18.16</version>
    </dependency>
</dependencies>

2. Domain Model for GraphQL and Hibernate

In short, Quarkus simplifies the creation of GraphQL APIs. We don’t have to manually define any schemas. The only thing we need to do is create a domain model and use some annotations. First things first – our domain model. To clarify, I’m using the same classes for ORM and API. Of course, we should create DTO objects to expose data as a GraphQL API, but I want to simplify our example implementation as much as I can. Here’s the Employee entity class.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Also, let’s take a look at the Department entity.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Besides entities, we also have input parameters used in the mutations. However, the input objects are much simpler than outputs. Just to compare, here’s the DepartmentInput class.

@Data
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}

3. GraphQL Filtering with Quarkus

In this section, we will create a dynamic filter in the GraphQL API. Our sample filter allows defining criteria for three different Employee fields: salary, age, and position. We may set a single field, two of them, or all three. The conditions are combined with AND. The class with the filter implementation is visible below. It consists of several fields represented by FilterField objects.

@Data
public class EmployeeFilter {
   private FilterField salary;
   private FilterField age;
   private FilterField position;
}

Then, let's take a look at the FilterField implementation. It has two parameters: operator and value. Based on the values of these parameters, we generate a JPA Criteria Predicate. I'm generating only the most common conditions for comparisons between two numbers or strings.

@Data
public class FilterField {
   private String operator;
   private String value;

   public Predicate generateCriteria(CriteriaBuilder builder, Path field) {
      try {
         int v = Integer.parseInt(value);
         switch (operator) {
         case "lt": return builder.lt(field, v);
         case "le": return builder.le(field, v);
         case "gt": return builder.gt(field, v);
         case "ge": return builder.ge(field, v);
         case "eq": return builder.equal(field, v);
         }
      } catch (NumberFormatException e) {
         switch (operator) {
         case "endsWith": return builder.like(field, "%" + value);
         case "startsWith": return builder.like(field, value + "%");
         case "contains": return builder.like(field, "%" + value + "%");
         case "eq": return builder.equal(field, value);
         }
      }

      return null;
   }
}
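The dispatch logic in generateCriteria can be demonstrated without JPA. The following sketch keeps the same parse-then-fall-back structure (try the value as a number, fall back to string operators on NumberFormatException), but returns a plain java.util.function.Predicate instead of a Criteria Predicate. The operator names mirror the class above; the implementation itself is a simplified stand-in:

```java
import java.util.function.Predicate;

// JPA-free sketch of the operator dispatch in FilterField.generateCriteria.
public class FilterDemo {
    static Predicate<Object> toPredicate(String operator, String value) {
        try {
            int v = Integer.parseInt(value); // numeric value: comparison operators
            switch (operator) {
                case "lt": return o -> ((Integer) o) < v;
                case "gt": return o -> ((Integer) o) > v;
                case "eq": return o -> ((Integer) o) == v;
            }
        } catch (NumberFormatException e) {
            switch (operator) { // non-numeric value: string operators
                case "startsWith": return o -> ((String) o).startsWith(value);
                case "contains":   return o -> ((String) o).contains(value);
                case "eq":         return o -> ((String) o).equals(value);
            }
        }
        return o -> true; // unknown operator: match everything
    }

    public static void main(String[] args) {
        System.out.println(toPredicate("gt", "19000").test(30000));             // true
        System.out.println(toPredicate("startsWith", "Dev").test("Developer")); // true
    }
}
```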

After defining the model classes, we may proceed to the repository implementation. We will use PanacheRepository for that. The idea behind it is quite similar to Spring Data repositories. However, there is nothing similar to the Spring Data Specification interface, which can be used to execute JPA Criteria queries. Since we need to build a query based on dynamic criteria, such an interface would be helpful. Instead, we need to inject EntityManager into the repository class and use it directly to obtain the JPA CriteriaBuilder. Finally, we execute a query with the criteria and return a list of employees matching the input conditions.

@ApplicationScoped
public class EmployeeRepository implements PanacheRepository<Employee> {

   private EntityManager em;

   public EmployeeRepository(EntityManager em) {
      this.em = em;
   }

   public List<Employee> findByCriteria(EmployeeFilter filter) {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Employee> criteriaQuery = builder.createQuery(Employee.class);
      Root<Employee> root = criteriaQuery.from(Employee.class);
      Predicate predicate = null;
      if (filter.getSalary() != null)
         predicate = filter.getSalary().generateCriteria(builder, root.get("salary"));
      if (filter.getAge() != null)
         predicate = (predicate == null ?
            filter.getAge().generateCriteria(builder, root.get("age")) :
            builder.and(predicate, filter.getAge().generateCriteria(builder, root.get("age"))));
      if (filter.getPosition() != null)
         predicate = (predicate == null ? filter.getPosition().generateCriteria(builder, root.get("position")) :
            builder.and(predicate, filter.getPosition().generateCriteria(builder, root.get("position"))));

      if (predicate != null)
         criteriaQuery.where(predicate);

      return em.createQuery(criteriaQuery).getResultList();
   }

}
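As a design note, the null-checking chain in findByCriteria can also be expressed by collecting the per-field predicates into a list and reducing them with AND. The following JPA-free sketch uses plain java.util.function.Predicate as a stand-in for the Criteria Predicate, with hypothetical data:

```java
import java.util.List;
import java.util.function.Predicate;

// Alternative to the predicate == null ? ... : builder.and(...) chain:
// collect the conditions, then reduce with AND.
public class AndCombineDemo {
    record Employee(int salary, int age) {}

    static Predicate<Employee> combine(List<Predicate<Employee>> conditions) {
        return conditions.stream()
                .reduce(Predicate::and) // p1 AND p2 AND ...
                .orElse(e -> true);     // no filters set: match all rows
    }

    public static void main(String[] args) {
        Predicate<Employee> p = combine(List.of(
                e -> e.salary() > 19000,
                e -> e.age() > 30));
        System.out.println(p.test(new Employee(20000, 35))); // true
        System.out.println(p.test(new Employee(20000, 25))); // false
    }
}
```

With JPA Criteria, the equivalent trick is collecting the predicates into a list and passing them to builder.and(...), which accepts a varargs array.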

In the last step in this section, we create the GraphQL resources. In short, all we need to do is annotate the class with @GraphQLApi and the methods with @Query or @Mutation. If you define any input parameter in a method, you should annotate it with @Name. The EmployeeFetcher class is responsible just for defining queries. It uses the built-in methods provided by PanacheRepository and our custom search method created inside the EmployeeRepository class.

@GraphQLApi
public class EmployeeFetcher {

   private EmployeeRepository repository;

   public EmployeeFetcher(EmployeeRepository repository){
      this.repository = repository;
   }

   @Query("employees")
   public List<Employee> findAll() {
      return repository.listAll();
   }

   @Query("employee")
   public Employee findById(@Name("id") Long id) {
      return repository.findById(id);
   }

   @Query("employeesWithFilter")
   public List<Employee> findWithFilter(@Name("filter") EmployeeFilter filter) {
      return repository.findByCriteria(filter);
   }

}

4. Fetching Relations with Quarkus GraphQL

As you have probably figured out, all the JPA relations are configured in lazy mode. To fetch them, we have to request them explicitly in the GraphQL query. For example, we may query all departments and fetch the organization of each department returned on the list. Let's analyze the request visible below. It contains the field organization related to the @ManyToOne relation between the Department and Organization entities.

{
  departments {
    id
    name
    organization {
      id
      name
    }
  }
}

How do we handle it on the server side? First, we need to detect the presence of such a relationship field in the GraphQL query. To analyze the input query, we can use the DataFetchingEnvironment and DataFetchingFieldSelectionSet objects. Then we need to prepare different JPA queries depending on the fields selected in the GraphQL query. Once again, we will use JPA Criteria for that. As before, we place the implementation responsible for performing a dynamic join inside the repository bean. To obtain DataFetchingEnvironment, we first need to inject the SmallRye GraphQL Context bean. With the following DepartmentRepository implementation, we avoid a possible N+1 problem and fetch only the required relations.

@ApplicationScoped
public class DepartmentRepository implements PanacheRepository<Department> {

   private EntityManager em;
   private Context context;

   public DepartmentRepository(EntityManager em, Context context) {
      this.em = em;
      this.context = context;
   }

   public List<Department> findAllByCriteria() {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Department> criteriaQuery = builder.createQuery(Department.class);
      Root<Department> root = criteriaQuery.from(Department.class);
      DataFetchingEnvironment dfe = context.unwrap(DataFetchingEnvironment.class);
      DataFetchingFieldSelectionSet selectionSet = dfe.getSelectionSet();
      if (selectionSet.contains("employees")) {
         root.fetch("employees", JoinType.LEFT);
      }
      if (selectionSet.contains("organization")) {
         root.fetch("organization", JoinType.LEFT);
      }
      criteriaQuery.select(root).distinct(true);
      return em.createQuery(criteriaQuery).getResultList();
   }

   public Department findByIdWithCriteria(Long id) {
      CriteriaBuilder builder = em.getCriteriaBuilder();
      CriteriaQuery<Department> criteriaQuery = builder.createQuery(Department.class);
      Root<Department> root = criteriaQuery.from(Department.class);
      DataFetchingEnvironment dfe = context.unwrap(DataFetchingEnvironment.class);
      DataFetchingFieldSelectionSet selectionSet = dfe.getSelectionSet();
      if (selectionSet.contains("employees")) {
         root.fetch("employees", JoinType.LEFT);
      }
      if (selectionSet.contains("organization")) {
         root.fetch("organization", JoinType.LEFT);
      }
      criteriaQuery.where(builder.equal(root.get("id"), id));
      return em.createQuery(criteriaQuery).getSingleResult();
   }
}
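The idea of selection-set-driven fetching can be sketched without Quarkus or JPA. Below, a plain Set<String> stands in for DataFetchingFieldSelectionSet, and the returned strings are illustrative join fragments, not real JPQL:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// JPA-free sketch of the pattern in DepartmentRepository: inspect which fields
// the client selected and add a fetch clause only for those relations.
public class SelectionSetDemo {
    static List<String> fetchClauses(Set<String> selectedFields) {
        List<String> joins = new ArrayList<>();
        if (selectedFields.contains("employees")) {
            joins.add("left join fetch d.employees");
        }
        if (selectedFields.contains("organization")) {
            joins.add("left join fetch d.organization");
        }
        return joins;
    }

    public static void main(String[] args) {
        // Only the relations the GraphQL query actually asked for get joined.
        System.out.println(fetchClauses(Set.of("id", "name", "organization")));
        // [left join fetch d.organization]
    }
}
```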

Finally, we just need to create a resource controller. It uses our custom JPA queries defined in DepartmentRepository.

@GraphQLApi
public class DepartmentFetcher {

   private DepartmentRepository repository;

   DepartmentFetcher(DepartmentRepository repository) {
      this.repository = repository;
   }

   @Query("departments")
   public List<Department> findAll() {
      return repository.findAllByCriteria();
   }

   @Query("department")
   public Department findById(@Name("id") Long id) {
      return repository.findByIdWithCriteria(id);
   }

}

5. Handling GraphQL Mutations with Quarkus

In our sample application, we separate the implementation of queries from mutations. So, let's take a look at the DepartmentMutation class. Instead of @Query, we use the @Mutation annotation on the method. We also use the DepartmentInput object as the mutation method parameter.

@GraphQLApi
public class DepartmentMutation {

   private DepartmentRepository departmentRepository;
   private OrganizationRepository organizationRepository;

   DepartmentMutation(DepartmentRepository departmentRepository, 
         OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.organizationRepository = organizationRepository;
   }

   @Mutation("newDepartment")
   public Department newDepartment(@Name("input") DepartmentInput departmentInput) {
      Organization organization = organizationRepository
         .findById(departmentInput.getOrganizationId());
      Department department = new Department(null, departmentInput.getName(), null, organization);
      departmentRepository.persist(department);
      return department;
   }

}

6. Testing with GraphiQL

Once we have finished the implementation, we can build and start our Quarkus application using the following Maven command.

$ mvn package quarkus:dev

After startup, the application is available on port 8080. We may also take a look at the list of included Quarkus modules.

(Image: quarkus-graphql-startup)

Quarkus automatically generates a GraphQL schema based on the source code. In order to display it, invoke the URL http://localhost:8080/graphql/schema.graphql. Of course, this is an optional step. Something that will be pretty useful for us, though, is the GraphiQL tool. It is embedded in the Quarkus application, allows us to easily interact with GraphQL APIs, and can be accessed at http://localhost:8080/graphql-ui/. First, let's run the following query to test the filtering feature.

{
  employeesWithFilter(filter: {
    salary: {
      operator: "gt"
      value: "19000"
    },
    age: {
      operator: "gt"
      value: "30"
    }
  }) {
    id
    firstName
    lastName
    position
  }
}

Here’s the SQL query generated by Hibernate for our GraphQL query.

select
   employee0_.id as id1_1_,
   employee0_.age as age2_1_,
   employee0_.department_id as departme7_1_,
   employee0_.firstName as firstnam3_1_,
   employee0_.lastName as lastname4_1_,
   employee0_.organization_id as organiza8_1_,
   employee0_.position as position5_1_,
   employee0_.salary as salary6_1_ 
from
   Employee employee0_ 
where
   employee0_.salary>19000 
   and employee0_.age>30

Our sample application inserts some test data into the H2 database on startup, so we just need to execute the query using GraphiQL.

Now, let's repeat the same exercise to test the join feature. Here's our GraphQL input query responsible for fetching the relation with the Organization entity.

{
  department(id: 5) {
    id
    name
    organization {
      id
      name
    }
  }
}

Hibernate generates the following SQL query for that.

select
   department0_.id as id1_0_0_,
   organizati1_.id as id1_2_1_,
   department0_.name as name2_0_0_,
   department0_.organization_id as organiza3_0_0_,
   organizati1_.name as name2_2_1_ 
from
   Department department0_ 
left outer join
   Organization organizati1_ 
      on department0_.organization_id=organizati1_.id 
where
   department0_.id=5

Once again, let’s view the response for our query using GraphiQL.

Final Thoughts

GraphQL support in Quarkus, like several other features, is based on the SmallRye project. In Spring Boot, we can use third-party libraries that provide GraphQL support. One of them is Netflix DGS. There is also a popular Kickstart GraphQL library described in this article. However, we can’t use any default implementation developed by the Spring Team.

With Quarkus GraphQL support, we can easily migrate from Spring Boot to Quarkus. I would not say that Quarkus currently offers as many GraphQL features as, for example, Netflix DGS, but it is still under active development. We could also easily replace Spring Data with the Quarkus Panache project. The lack of a feature similar to the Spring Data Specification can be easily bypassed by using the JPA CriteriaBuilder and EntityManager directly. Finally, I really like the Quarkus GraphQL support, because I don't have to take care of the GraphQL schema creation: it is generated automatically based on the source code and annotations.

The post An Advanced GraphQL with Quarkus appeared first on Piotr's TechBlog.

An Advanced GraphQL with Spring Boot and Netflix DGS
https://piotrminkowski.com/2021/04/08/an-advanced-graphql-with-spring-boot-and-netflix-dgs/
Thu, 08 Apr 2021 08:05:27 +0000

In this article, you will learn how to use the Netflix DGS library to simplify GraphQL development with Spring Boot. We will discuss more advanced topics related to GraphQL and databases, like filtering or relationship fetching. I published a similar article some months ago: An Advanced Guide to GraphQL with Spring Boot. However, it is based on a different library called GraphQL Java Kickstart (https://github.com/graphql-java-kickstart/graphql-spring-boot). Since Netflix DGS was released a few months ago, you might want to take a look at it. So, that's what we will do now.

Netflix DGS is an annotation-based GraphQL Java library built on top of Spring Boot. Consequently, it is dedicated to Spring Boot applications. Besides the annotation-based programming model, it provides several useful features. Netflix DGS allows generating source code from GraphQL schemas. It simplifies writing unit tests and also supports websockets, file uploads, or GraphQL federation. In order to show you the differences between this library and the previously described Kickstart library, I’ll use the same Spring Boot application as before. Let me just briefly describe our scenario.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

First, you should go to the sample-app-netflix-dgs directory. The example with GraphQL Java Kickstart is available inside the sample-app-kickstart directory.

As I mentioned before, we use the same schema and entity model as before. I created an application that exposes an API using GraphQL and connects to an H2 in-memory database. We will discuss Spring Boot GraphQL JPA support. For integration with the H2 database, I'm using Spring Data JPA and Hibernate. I have implemented three entities: Employee, Department, and Organization – each of them stored in a separate table. The relationship model between them is visualized in the picture below.

spring-boot-graphql-netflix-domain

1. Dependencies for Spring Boot and Netflix GraphQL

Let’s start with dependencies. We need to include Spring Web, Spring Data JPA, and the com.database:h2 artifact for running an in-memory database with our application. Of course, we also have to include Netflix DGS Spring Boot Starter. Here’s a list of required dependencies in Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>com.h2database</groupId>
   <artifactId>h2</artifactId>
   <scope>runtime</scope>
</dependency>
<dependency>
   <groupId>org.projectlombok</groupId>
   <artifactId>lombok</artifactId>
</dependency>
<dependency>
   <groupId>com.netflix.graphql.dgs</groupId>
   <artifactId>graphql-dgs-spring-boot-starter</artifactId>
   <version>${netflix-dgs.spring.version}</version>
</dependency>

2. GraphQL schemas

Before we start the implementation, we need to create GraphQL schemas with objects, queries, and mutations. A schema may be defined in multiple *.graphqls files, but all of them have to be placed inside the /src/main/resources/schemas directory. Thanks to that, the Netflix DGS library detects and loads them automatically.

The GraphQL schema for each entity is located in a separate file. Let's take a look at the department.graphqls file. There is the QueryResolver type with two find methods and the MutationResolver type with a single method for adding new departments. We also have an input object for the mutation and a standard type definition for queries.

type QueryResolver {
   departments: [Department]
   department(id: ID!): Department!
}

type MutationResolver {
   newDepartment(department: DepartmentInput!): Department
}

input DepartmentInput {
   name: String!
   organizationId: Int
}

type Department {
   id: ID!
   name: String!
   organization: Organization
   employees: [Employee]
}

Then we may take a look at the organization.graphqls file. It is a little bit more complicated than the previous schema. As you can see, I'm using the extend keyword on QueryResolver and MutationResolver. That's because we have several files with GraphQL schemas.

extend type QueryResolver {
  organizations: [Organization]
  organization(id: ID!): Organization!
}

extend type MutationResolver {
  newOrganization(organization: OrganizationInput!): Organization
}

input OrganizationInput {
  name: String!
}

type Organization {
  id: ID!
  name: String!
  employees: [Employee]
  departments: [Department]
}

Finally, here's the schema for the Employee entity. In contrast to the previous schemas, it has objects responsible for filtering, like EmployeeFilter. We also need to define the schema object with the mutation and query types.

extend type QueryResolver {
  employees: [Employee]
  employeesWithFilter(filter: EmployeeFilter): [Employee]
  employee(id: ID!): Employee!
}

extend type MutationResolver {
  newEmployee(employee: EmployeeInput!): Employee
}

input EmployeeInput {
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  organizationId: Int!
  departmentId: Int!
}

type Employee {
  id: ID!
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  department: Department
  organization: Organization
}

input EmployeeFilter {
  salary: FilterField
  age: FilterField
  position: FilterField
}

input FilterField {
  operator: String!
  value: String!
}

schema {
  query: QueryResolver
  mutation: MutationResolver
}

3. Domain Model for GraphQL and Hibernate

We could have generated Java source code from the previously defined GraphQL schemas. However, I prefer to use Lombok annotations, so I will write the classes manually. Here's the Employee entity corresponding to the Employee object defined in the GraphQL schema.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Also, let’s take a look at the Department entity.

@Entity
@Data
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

The input objects are much simpler. Just to compare, here’s the DepartmentInput class.

@Data
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}

4. Using Netflix DGS with Spring Boot

Netflix DGS provides annotation-based support for Spring Boot. Let's analyze the most interesting features using the example implementation of a query resolver. The EmployeeFetcher is responsible for defining queries related to the Employee object. We should annotate such a class with @DgsComponent (1). We may create our custom context definition to pass data between different methods or even different query resolvers (2). Then, we have to annotate every query method with @DgsData (3). The parentType and field attributes should match the names declared in the GraphQL schemas. We defined three queries in the employee.graphqls file, so we have three methods inside EmployeeFetcher. After fetching all employees, we may save them in our custom context object (4) and then reuse them in other methods or resolvers (5).

The last query method, findWithFilter, performs advanced filtering based on the dynamic list of fields passed in the input (6). To pass an input parameter, we should annotate the method argument with @InputArgument.

@DgsComponent // (1)
public class EmployeeFetcher {

   private EmployeeRepository repository;
   private EmployeeContextBuilder contextBuilder; // (2)

   public EmployeeFetcher(EmployeeRepository repository, 
         EmployeeContextBuilder contextBuilder) {
      this.repository = repository;
      this.contextBuilder = contextBuilder;
    }

   @DgsData(parentType = "QueryResolver", field = "employees") // (3)
   public List<Employee> findAll() {
      List<Employee> employees = (List<Employee>) repository.findAll();
      contextBuilder.withEmployees(employees).build(); // (4)
      return employees;
   }

   @DgsData(parentType = "QueryResolver", field = "employee") 
   public Employee findById(@InputArgument("id") Integer id, 
               DataFetchingEnvironment dfe) {
      EmployeeContext employeeContext = DgsContext.getCustomContext(dfe); // (5)
      List<Employee> employees = employeeContext.getEmployees();
      Optional<Employee> employeeOpt = employees.stream()
         .filter(employee -> employee.getId().equals(id)).findFirst();
      return employeeOpt.orElseGet(() -> 
         repository.findById(id)
            .orElseThrow(DgsEntityNotFoundException::new));
   }

   @DgsData(parentType = "QueryResolver", field = "employeesWithFilter")
   public Iterable<Employee> findWithFilter(@InputArgument("filter") EmployeeFilter filter) { // (6)
      Specification<Employee> spec = null;
      if (filter.getSalary() != null)
         spec = bySalary(filter.getSalary());
      if (filter.getAge() != null)
         spec = (spec == null ? byAge(filter.getAge()) : spec.and(byAge(filter.getAge())));
      if (filter.getPosition() != null)
         spec = (spec == null ? byPosition(filter.getPosition()) :
                spec.and(byPosition(filter.getPosition())));
      if (spec != null)
         return repository.findAll(spec);
      else
         return repository.findAll();
   }

   private Specification<Employee> bySalary(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("salary"));
   }

   private Specification<Employee> byAge(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("age"));
   }

   private Specification<Employee> byPosition(FilterField filterField) {
      return (root, query, builder) -> 
         filterField.generateCriteria(builder, root.get("position"));
   }
}
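
The article never shows the body of FilterField.generateCriteria. As a rough, self-contained sketch of the idea (the operator names "eq", "gt", and "lt" are my assumption, not taken from the source), the operator-to-predicate mapping could look like this in-memory analog of the real CriteriaBuilder-based code:

```java
// Hypothetical, simplified analog of FilterField's operator handling.
// The real method builds a JPA Predicate with CriteriaBuilder; here we
// only map an operator string to a comparison on plain integer values.
import java.util.function.BiPredicate;

public class FilterFieldDemo {

    // Returns true when "actual" satisfies "<operator> <value>",
    // e.g. matches("gt", "10000", 12000) checks 12000 > 10000.
    public static boolean matches(String operator, String value, int actual) {
        int v = Integer.parseInt(value);
        BiPredicate<Integer, Integer> p = switch (operator) {
            case "eq" -> (a, b) -> a.equals(b);
            case "gt" -> (a, b) -> a > b;
            case "lt" -> (a, b) -> a < b;
            default -> throw new IllegalArgumentException("Unknown operator: " + operator);
        };
        return p.test(actual, v);
    }

    public static void main(String[] args) {
        System.out.println(matches("gt", "10000", 12000)); // true
        System.out.println(matches("lt", "30", 40));       // false
    }
}
```

In the real class, each branch would call the corresponding CriteriaBuilder method (equal, greaterThan, lessThan) on the JPA path passed from the Specification instead of comparing plain integers.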

Then, we may switch to the DepartmentFetcher class. It shows an example of relationship fetching. We use DataFetchingEnvironment to detect whether the input query contains a relationship field (1). In our case, it may be employees or organization. If any of those fields is present, we add the relation to the JOIN statement (2). We implement the same approach for both the findById (3) and findAll methods. However, the findById method also uses data stored in the custom context represented by the EmployeeContext bean (4). If the findAll method in EmployeeFetcher has already been invoked, we can fetch the employees assigned to a particular department from the context instead of including the relation in the JOIN statement (5).

@DgsComponent
public class DepartmentFetcher {

   private DepartmentRepository repository;

   DepartmentFetcher(DepartmentRepository repository) {
      this.repository = repository;
   }

   @DgsData(parentType = "QueryResolver", field = "departments")
   public Iterable<Department> findAll(DataFetchingEnvironment environment) {
      DataFetchingFieldSelectionSet s = environment.getSelectionSet(); // (1)
      if (s.contains("employees") && !s.contains("organization")) // (2)
         return repository.findAll(fetchEmployees());
      else if (!s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchOrganization());
      else if (s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchEmployees().and(fetchOrganization()));
      else
         return repository.findAll();
   }

   @DgsData(parentType = "QueryResolver", field = "department")
   public Department findById(@InputArgument("id") Integer id, 
               DataFetchingEnvironment environment) { // (3)
      Specification<Department> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment.getSelectionSet();
      EmployeeContext employeeContext = DgsContext.getCustomContext(environment); // (4)
      Set<Employee> employees = null;
      if (selectionSet.contains("employees")) {
         if (employeeContext.getEmployees().isEmpty()) // (5)
            spec = spec.and(fetchEmployees());
         else
            employees = employeeContext.getEmployees().stream()
               .filter(emp -> emp.getDepartment().getId().equals(id))
               .collect(Collectors.toSet());
      }
      if (selectionSet.contains("organization"))
         spec = spec.and(fetchOrganization());
      Department department = repository
         .findOne(spec).orElseThrow(DgsEntityNotFoundException::new);
      if (employees != null)
         department.setEmployees(employees);
      return department;
   }

   private Specification<Department> fetchOrganization() {
      return (root, query, builder) -> {
         Fetch<Department, Organization> f = root.fetch("organization", JoinType.LEFT);
         Join<Department, Organization> join = (Join<Department, Organization>) f;
         return join.getOn();
      };
   }

   private Specification<Department> fetchEmployees() {
      return (root, query, builder) -> {
         Fetch<Department, Employee> f = root.fetch("employees", JoinType.LEFT);
         Join<Department, Employee> join = (Join<Department, Employee>) f;
         return join.getOn();
      };
   }

   private Specification<Department> byId(Integer id) {
      return (root, query, builder) -> builder.equal(root.get("id"), id);
   }
}

In comparison to the data fetchers, the implementation of mutation handlers is rather simple. We just need to define a single method for adding new entities. Here's the implementation of DepartmentMutation.

@DgsComponent
public class DepartmentMutation {

   private DepartmentRepository departmentRepository;
   private OrganizationRepository organizationRepository;

   DepartmentMutation(DepartmentRepository departmentRepository, 
               OrganizationRepository organizationRepository) {
      this.departmentRepository = departmentRepository;
      this.organizationRepository = organizationRepository;
   }

   @DgsData(parentType = "MutationResolver", field = "newDepartment")
   public Department newDepartment(@InputArgument("department") DepartmentInput input) {
      Organization organization = organizationRepository
         .findById(input.getOrganizationId())
         .orElseThrow();
      return departmentRepository
         .save(new Department(null, input.getName(), null, organization));
   }

}
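
Assuming the schema defined earlier, a mutation hitting this handler could look as follows (the organizationId value 1 is just an example that happens to match the test data inserted at startup — an assumption on my part):

```graphql
mutation {
  newDepartment(department: { name: "Test Department", organizationId: 1 }) {
    id
    name
  }
}
```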

5. Running Spring Boot application and testing Netflix GraphQL support

The last step in our exercise is to run and test the Spring Boot application. It inserts some test data into the H2 database on startup. So, let's just use the GraphiQL tool to run test queries. It is automatically included in the application by the Netflix DGS library. We may display it by invoking the URL http://localhost:8080/graphiql.

In the first step, we run the GraphQL query responsible for fetching all employees with departments. The method that handles the query also builds a custom context and stores all existing employees there.
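
The query shown in the screenshot is not reproduced here in text; based on the schemas defined earlier, it could look like this:

```graphql
query {
  employees {
    id
    firstName
    lastName
    position
    department {
      id
      name
    }
  }
}
```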

spring-boot-graphql-netflix-query

Then, we may run a query responsible for finding a single department by its id. We will fetch both relations: the one-to-many with Employee and the many-to-one with Organization.
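
Again, based on the schemas above, such a query might look as follows (the id 1 is an example value):

```graphql
query {
  department(id: 1) {
    id
    name
    organization {
      id
      name
    }
    employees {
      id
      firstName
      lastName
    }
  }
}
```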

While the Organization entity is fetched using a JOIN statement, the employees are taken from the context. Here's the SQL query generated for our current scenario.

spring-boot-graphql-netflix-query-next

Finally, we can test our filtering feature. Let’s filter employees using salary and age criteria.
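
A query of that shape could look like the sketch below. Note that the operator strings ("gt", "lt") are assumptions — the supported set depends on the FilterField.generateCriteria implementation, which is not shown in the article:

```graphql
query {
  employeesWithFilter(filter: {
    salary: { operator: "gt", value: "10000" },
    age: { operator: "lt", value: "40" }
  }) {
    id
    firstName
    lastName
    salary
    age
  }
}
```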

Let’s take a look at the SQL query for the recently called method.

Final Thoughts

Netflix DGS seems to be an interesting alternative to other libraries that provide GraphQL support for Spring Boot. It was open-sourced only some weeks ago, but it is a rather stable solution. I guess that before releasing it publicly, the Netflix team battle-tested it. I like its annotation-based programming style and several other features. I hope this article helps you get started with Netflix DGS.

The post An Advanced GraphQL with Spring Boot and Netflix DGS appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/04/08/an-advanced-graphql-with-spring-boot-and-netflix-dgs/feed/ 20 9639
Guide to Quarkus with Kotlin https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/ https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/#comments Sun, 09 Aug 2020 08:28:56 +0000 http://piotrminkowski.com/?p=8353 Quarkus is a lightweight Java framework developed by RedHat. It is dedicated for cloud-native applications that require a small memory footprint and a fast startup time. Its programming model is built on top of proven standards like Eclipse MicroProfile. Recently it is growing in popularity. It may be considered as an alternative to Spring Boot […]

The post Guide to Quarkus with Kotlin appeared first on Piotr's TechBlog.

]]>
Quarkus is a lightweight Java framework developed by Red Hat. It is dedicated to cloud-native applications that require a small memory footprint and a fast startup time. Its programming model is built on top of proven standards like Eclipse MicroProfile. Recently, it has been growing in popularity. It may be considered an alternative to the Spring Boot framework, especially if you are running your applications on Kubernetes or OpenShift.
In this guide, you will learn how to implement a simple Quarkus Kotlin application that exposes REST endpoints and connects to a database. We will discuss the following topics:

  • Implementation of REST endpoints
  • Integration with H2 using Hibernate and the Panache project
  • Generating and exposing OpenAPI/Swagger documentation
  • Exposing health checks
  • Exposing basic metrics
  • Logging requests and responses
  • Testing REST endpoints with the RestAssured library

Source code

The source code with the sample Quarkus Kotlin applications is available on GitHub. First, you need to clone the following repository: https://github.com/piomin/sample-quarkus-applications.git. Then, you need to go to the employee-service directory.

1. Enable Quarkus Kotlin support

To enable Kotlin support in Quarkus we need to include the quarkus-kotlin module. We also have to add the kotlin-stdlib library.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib</artifactId>
</dependency>

In the next step, we need to include the kotlin-maven-plugin. Besides the standard configuration, we have to use the all-open Kotlin compiler plugin. The all-open compiler plugin makes classes annotated with a specific annotation, and their members, open without the explicit open keyword. Since classes annotated with @Path, @ApplicationScoped, or @QuarkusTest must not be final, we need to add all those annotations to the pluginOptions section.

<build>
   <sourceDirectory>src/main/kotlin</sourceDirectory>
   <testSourceDirectory>src/test/kotlin</testSourceDirectory>
   <plugins>
      <plugin>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-maven-plugin</artifactId>
         <version>${quarkus-plugin.version}</version>
         <executions>
            <execution>
               <goals>
                  <goal>build</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
      <plugin>
         <groupId>org.jetbrains.kotlin</groupId>
         <artifactId>kotlin-maven-plugin</artifactId>
         <version>${kotlin.version}</version>
         <executions>
            <execution>
               <id>compile</id>
               <goals>
                  <goal>compile</goal>
               </goals>
            </execution>
            <execution>
               <id>test-compile</id>
               <goals>
                  <goal>test-compile</goal>
               </goals>
            </execution>
         </executions>
         <dependencies>
            <dependency>
               <groupId>org.jetbrains.kotlin</groupId>
               <artifactId>kotlin-maven-allopen</artifactId>
               <version>${kotlin.version}</version>
            </dependency>
         </dependencies>
         <configuration>
            <javaParameters>true</javaParameters>
            <jvmTarget>11</jvmTarget>
            <compilerPlugins>
               <plugin>all-open</plugin>
            </compilerPlugins>
            <pluginOptions>
               <option>all-open:annotation=javax.ws.rs.Path</option>
               <option>all-open:annotation=javax.enterprise.context.ApplicationScoped</option>
               <option>all-open:annotation=io.quarkus.test.junit.QuarkusTest</option>
            </pluginOptions>
         </configuration>
      </plugin>
   </plugins>
</build>

2. Implement REST endpoint

In Quarkus, support for REST is built on top of the RESTEasy and JAX-RS libraries. You can choose between two available extensions for JSON serialization/deserialization: JSON-B and Jackson. Since I decided to use Jackson, I need to include the quarkus-resteasy-jackson dependency. It also pulls in the quarkus-resteasy module.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>

We mostly use JAX-RS annotations for mapping controller methods and fields to HTTP endpoints. We may also use RESTEasy annotations like @PathParam, which does not require setting any value. In order to interact with the database, we inject a repository bean.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
class EmployeeResource(val repository: EmployeeRepository) {

    @POST
    @Transactional
    fun add(employee: Employee): Response {
        repository.persist(employee)
        return Response.ok(employee).status(201).build()
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    fun delete(@PathParam id: Long) {
        repository.deleteById(id)
    }

    @GET
    fun findAll(): List<Employee> = repository.listAll()

    @GET
    @Path("/{id}")
    fun findById(@PathParam id: Long): Employee? = repository.findById(id)

    @GET
    @Path("/first-name/{firstName}/last-name/{lastName}")
    fun findByFirstNameAndLastName(@PathParam firstName: String, @PathParam lastName: String): List<Employee>
            = repository.findByFirstNameAndLastName(firstName, lastName)

    @GET
    @Path("/salary/{salary}")
    fun findBySalary(@PathParam salary: Int): List<Employee> = repository.findBySalary(salary)

    @GET
    @Path("/salary-greater-than/{salary}")
    fun findBySalaryGreaterThan(@PathParam salary: Int): List<Employee>
            = repository.findBySalaryGreaterThan(salary)

}

3. Integration with database

Quarkus provides the Panache JPA extension to simplify work with Hibernate ORM. It also provides driver extensions for the most popular SQL databases like PostgreSQL, MySQL, or H2. To enable both of these features for the H2 in-memory database, we need to include the following dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-hibernate-orm-panache-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

We should also configure the connection settings inside the application.properties file.


quarkus.datasource.db-kind=h2
quarkus.datasource.username=sa
quarkus.datasource.password=password
quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

The Panache extension allows us to use the well-known repository pattern. To use it, we should first define an entity that extends the PanacheEntity class.

@Entity
data class Employee(var firstName: String = "",
                    var lastName: String = "",
                    var position: String = "",
                    var salary: Int = 0,
                    var organizationId: Int? = null,
                    var departmentId: Int? = null): PanacheEntity()

In the next step, we define a repository bean that implements the PanacheRepository interface. It comes with some basic methods like persist, deleteById, or listAll. We may also use those basic methods to implement more advanced queries or operations.

@ApplicationScoped
class EmployeeRepository: PanacheRepository<Employee> {
    fun findByFirstNameAndLastName(firstName: String, lastName: String): List<Employee> =
           list("firstName = ?1 and lastName = ?2", firstName, lastName)

    fun findBySalary(salary: Int): List<Employee> = list("salary", salary)

    fun findBySalaryGreaterThan(salary: Int): List<Employee> = list("salary > ?1", salary)
}

4. Enable OpenAPI documentation for Quarkus Kotlin

It is possible to generate the OpenAPI v3 specification automatically. To do that, we need to include the SmallRye OpenAPI extension. The specification is available under the /openapi path.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>

We may add some additional information to the generated OpenAPI specification, like a description or version number. To do that, we need to create an application class that extends javax.ws.rs.core.Application and annotate it with @OpenAPIDefinition, as shown below.

@OpenAPIDefinition(info = Info(title = "Employee API", version = "1.0"))
class EmployeeApplication: Application()

Usually, we want to expose the OpenAPI specification using the Swagger UI. This feature may be enabled with the configuration property quarkus.swagger-ui.always-include=true.
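
For example, the property can be added next to the datasource settings in application.properties:

```properties
# Include Swagger UI in all build profiles, not only in dev mode
quarkus.swagger-ui.always-include=true
```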

quarkus-swagger

5. Health checks

We may expose a built-in health check implementation by including the SmallRye Health extension.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

It exposes three REST endpoints compliant with the Kubernetes health check pattern:

  • /health/live – The application is up and running (Kubernetes liveness probe).
  • /health/ready – The application is ready to serve requests (Kubernetes readiness probe).
  • /health – Aggregates all health check procedures in the application.

The default readiness health check implementation verifies the database connection status, while the liveness check just determines whether the application is running.
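
A response from /health/ready follows the MicroProfile Health JSON format; the check name below is illustrative, as the exact entries depend on the installed extensions:

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "Database connections health check",
      "status": "UP"
    }
  ]
}
```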

quarkus-readiness

6. Expose metrics

We may enable metrics collection by adding the SmallRye Metrics extension. By default, it collects only JVM, CPU, and process metrics.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>

We may also make the library collect metrics for JAX-RS endpoints. To do that, we need to annotate the selected endpoints with @Timed.

@POST
@Transactional
@Timed(name = "add", unit = MetricUnits.MILLISECONDS)
fun add(employee: Employee): Response {
   repository.persist(employee)
   return Response.ok(employee).status(201).build()
}

Now, we may call the endpoint POST /employees 100 times in a row. Here's the list of metrics generated for that single endpoint. If you would like to ensure compatibility with the Micrometer metrics format, you need to set the following configuration property: quarkus.smallrye-metrics.micrometer.compatibility=true.

quarkus-metrics

7. Logging request and response for Quarkus Kotlin application

There is no built-in mechanism for logging HTTP requests and responses. However, we may implement a custom logging filter that implements the ContainerRequestFilter and ContainerResponseFilter interfaces.

@Provider
class LoggingFilter: ContainerRequestFilter, ContainerResponseFilter {

    private val logger: Logger = LoggerFactory.getLogger(LoggingFilter::class.java)

    @Context
    lateinit var info: UriInfo
    @Context
    lateinit var request: HttpServerRequest

    override fun filter(ctx: ContainerRequestContext) {
        logger.info("Request {} {}", ctx.method, info.path)
    }

    override fun filter(r: ContainerRequestContext, ctx: ContainerResponseContext) {
        logger.info("Response {} {}: {}", r.method, info.path, ctx.status)
    }
    
}

8. Testing

The quarkus-junit5 module is required for testing, as it provides the @QuarkusTest annotation that controls the testing framework. The rest-assured extension is not required, but it is a convenient way to test HTTP endpoints.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-junit5</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.rest-assured</groupId>
   <artifactId>kotlin-extensions</artifactId>
   <scope>test</scope>
</dependency>

In the first test, we add a new Employee. Then the second test verifies that there is a single Employee stored inside the in-memory database.

@QuarkusTest
class EmployeeResourceTest {

    @Test
    fun testAddEmployee() {
        val emp = Employee(firstName = "John", lastName = "Smith", position = "Developer", salary = 20000)
        given().body(emp).contentType(ContentType.JSON)
                .post("/employees")
                .then()
                .statusCode(201)
    }

    @Test
    fun testGetAll() {
        given().get("/employees")
                .then()
                .statusCode(200)
                .assertThat().body("size()", `is`(1))
    }

}

Conclusion

In this guide, I showed you how to build a Quarkus Kotlin application that connects to a database and follows some best practices, like exposing health checks and metrics, or logging incoming requests and outgoing responses. The last step is to run our sample application. To do that in development mode, we just need to execute the command mvn compile quarkus:dev. Here's my start screen. You can see there, for example, the list of included Quarkus modules.

quarkus-run

If you are interested in the Quarkus framework, the next useful article for you is Guide to Quarkus on Kubernetes.

The post Guide to Quarkus with Kotlin appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/feed/ 5 8353
An Advanced Guide to GraphQL with Spring Boot https://piotrminkowski.com/2020/07/31/an-advanced-guide-to-graphql-with-spring-boot/ https://piotrminkowski.com/2020/07/31/an-advanced-guide-to-graphql-with-spring-boot/#comments Fri, 31 Jul 2020 09:31:27 +0000 http://piotrminkowski.com/?p=8220 In this guide I’m going to discuss some more advanced topics related to GraphQL and databases, like filtering or relationship fetching. Of course, before proceeding to the more advanced issues I will take a moment to describe the basics – something you can be found in many other articles. If you already had the opportunity […]

The post An Advanced Guide to GraphQL with Spring Boot appeared first on Piotr's TechBlog.

]]>
In this guide, I'm going to discuss some more advanced topics related to GraphQL and databases, like filtering or relationship fetching. Of course, before proceeding to the more advanced issues, I will take a moment to describe the basics – something you can find in many other articles. If you have already had the opportunity to familiarize yourself with the concept of GraphQL, you may have some questions. Probably one of them is: "Ok. It's nice. But what if I would like to use GraphQL in a real application that connects to a database and provides an API for more advanced queries?".
If that is your main question, my current article is definitely for you. If you are thinking about using GraphQL in your microservices architecture, you may also refer to my previous article GraphQL – The Future of Microservices?.

Example

As you know, it is best to learn from examples, so I have created a sample Spring Boot application that exposes an API using GraphQL and connects to an H2 in-memory database. We will discuss Spring Boot GraphQL JPA support. For integration with the H2 database, I'm using Spring Data JPA and Hibernate. I have implemented three entities: Employee, Department, and Organization – each of them stored in a separate table. The relationship model between them is visualized in the picture below.

graphql-spring-boot-relations.png

The source code with the sample application is available on GitHub in the repository: https://github.com/piomin/sample-spring-boot-graphql.git

1. Dependencies

Let's start with dependencies. Here's a list of required dependencies for our application. We need to include the Spring Web and Spring Data JPA projects, and the com.h2database:h2 artifact for embedding an in-memory database in our application. I'm also using a Spring Boot library offering support for GraphQL. In fact, you may find some other Spring Boot GraphQL JPA libraries, but the one under the group com.graphql-java-kickstart (https://www.graphql-java-kickstart.com/spring-boot/) seems to be actively developed and maintained.


<properties>
   <graphql.spring.version>7.1.0</graphql.spring.version>
</properties>
<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
   </dependency>
   <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <scope>runtime</scope>
   </dependency>
   <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphql-spring-boot-starter</artifactId>
      <version>${graphql.spring.version}</version>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphiql-spring-boot-starter</artifactId>
      <version>${graphql.spring.version}</version>
   </dependency>
   <dependency>
      <groupId>com.graphql-java-kickstart</groupId>
      <artifactId>graphql-spring-boot-starter-test</artifactId>
      <version>${graphql.spring.version}</version>
      <scope>test</scope>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
   </dependency>
</dependencies>

2. Schemas

We start the implementation by defining GraphQL schemas with object, query and mutation definitions. The files are located inside the /src/main/resources/graphql directory, and after adding graphql-spring-boot-starter they are automatically detected by the application based on their *.graphqls suffix.
The GraphQL schema for each entity is located in a separate file. Let’s take a look at department.graphqls. It’s a very simple definition.

type QueryResolver {
    departments: [Department]
    department(id: ID!): Department!
}

type MutationResolver {
    newDepartment(department: DepartmentInput!): Department
}

input DepartmentInput {
    name: String!
    organizationId: Int
}

type Department {
    id: ID!
    name: String!
    organization: Organization
    employees: [Employee]
}

Here’s the schema inside the file organization.graphqls. As you can see, I’m using the extend keyword on QueryResolver and MutationResolver.

extend type QueryResolver {
    organizations: [Organization]
    organization(id: ID!): Organization!
}

extend type MutationResolver {
    newOrganization(organization: OrganizationInput!): Organization
}

input OrganizationInput {
    name: String!
}

type Organization {
    id: ID!
    name: String!
    employees: [Employee]
    departments: [Department]
}

The schema for Employee is a little more complicated than the two previously demonstrated schemas. I have defined an input object for filtering. It will be discussed in detail in the next section.

extend type QueryResolver {
  employees: [Employee]
  employeesWithFilter(filter: EmployeeFilter): [Employee]
  employee(id: ID!): Employee!
}

extend type MutationResolver {
  newEmployee(employee: EmployeeInput!): Employee
}

input EmployeeInput {
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  organizationId: Int!
  departmentId: Int!
}

type Employee {
  id: ID!
  firstName: String!
  lastName: String!
  position: String!
  salary: Int
  age: Int
  department: Department
  organization: Organization
}

input EmployeeFilter {
  salary: FilterField
  age: FilterField
  position: FilterField
}

input FilterField {
  operator: String!
  value: String!
}

schema {
  query: QueryResolver
  mutation: MutationResolver
}

3. Domain model

Let’s take a look at the corresponding domain model. Here’s the Employee entity. Each Employee is assigned to a single Department and Organization.

@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Employee {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String firstName;
   private String lastName;
   private String position;
   private int salary;
   private int age;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

Here’s the Department entity. It contains a list of employees and a reference to a single organization.


@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Department {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;
}

And finally, the Organization entity.

@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class Organization {
   @Id
   @GeneratedValue
   @EqualsAndHashCode.Include
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;
}

Entity classes are returned as results by queries. In mutations we use input objects, which have a slightly different implementation: they do not contain references to related entities, only the ids of the related objects.

@Data
@AllArgsConstructor
@NoArgsConstructor
public class DepartmentInput {
   private String name;
   private Integer organizationId;
}

4. Fetch relations

As you have probably figured out, all the JPA relations are configured in lazy mode. To fetch them we have to explicitly request it in our GraphQL query. For example, we may query all departments and fetch the organization of each department returned in the list.


{
  departments {
    id
    name
    organization {
      id
      name
    }
  }
}

Now, the question is how to handle it on the server side. The first thing we need to do is to detect the existence of such a relationship field in our GraphQL query. Why? Because we need to avoid the possible N+1 problem, which happens when the data access framework executes N additional SQL statements to fetch the same data that could have been retrieved when executing the primary SQL query. So, we need to prepare different JPA queries depending on the parameters set in the GraphQL query. We may do it in several ways, but the most convenient is to use the DataFetchingEnvironment parameter inside the QueryResolver implementation.
Let’s take a look at the implementation of the QueryResolver for Department. If we annotate a class that implements GraphQLQueryResolver with @Component, it is automatically detected by Spring Boot (thanks to graphql-spring-boot-starter). Then we add DataFetchingEnvironment as a parameter to each query. After that we invoke the getSelectionSet() method on the DataFetchingEnvironment object and check whether it contains the word organization (for fetching Organization) or employees (for fetching the list of employees). Depending on the requested relations we build different queries. The following fragment of code shows two methods implemented in DepartmentQueryResolver: departments and department.

@Component
public class DepartmentQueryResolver implements GraphQLQueryResolver {

   private DepartmentRepository repository;

   DepartmentQueryResolver(DepartmentRepository repository) {
      this.repository = repository;
   }

   public Iterable<Department> departments(DataFetchingEnvironment environment) {
      DataFetchingFieldSelectionSet s = environment.getSelectionSet();
      if (s.contains("employees") && !s.contains("organization"))
         return repository.findAll(fetchEmployees());
      else if (!s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchOrganization());
      else if (s.contains("employees") && s.contains("organization"))
         return repository.findAll(fetchEmployees().and(fetchOrganization()));
      else
         return repository.findAll();
   }

   public Department department(Integer id, DataFetchingEnvironment environment) {
      Specification<Department> spec = byId(id);
      DataFetchingFieldSelectionSet selectionSet = environment.getSelectionSet();
      if (selectionSet.contains("employees"))
         spec = spec.and(fetchEmployees());
      if (selectionSet.contains("organization"))
         spec = spec.and(fetchOrganization());
      return repository.findOne(spec).orElseThrow(NoSuchElementException::new);
   }
   
   // REST OF IMPLEMENTATION ...
}

The most convenient way to build dynamic queries is by using the JPA Criteria API. To be able to use it with Spring Data JPA, we first need to extend our repository interface with the JpaSpecificationExecutor interface. After that you may use the additional interface methods that let you execute specifications in a variety of ways. You may choose between the findAll and findOne methods.

public interface DepartmentRepository extends CrudRepository<Department, Integer>,
      JpaSpecificationExecutor<Department> {

}

Finally, we may prepare the methods that build the Specification objects. Each Specification produces a predicate. In this case we use three of them: for fetching the organization, for fetching the employees, and for filtering by id.

private Specification<Department> fetchOrganization() {
   return (Specification<Department>) (root, query, builder) -> {
      Fetch<Department, Organization> f = root.fetch("organization", JoinType.LEFT);
      Join<Department, Organization> join = (Join<Department, Organization>) f;
      return join.getOn();
   };
}

private Specification<Department> fetchEmployees() {
   return (Specification<Department>) (root, query, builder) -> {
      Fetch<Department, Employee> f = root.fetch("employees", JoinType.LEFT);
      Join<Department, Employee> join = (Join<Department, Employee>) f;
      return join.getOn();
   };
}

private Specification<Department> byId(Integer id) {
   return (Specification<Department>) (root, query, builder) -> builder.equal(root.get("id"), id);
}
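Conceptually, combining specifications this way is just predicate composition. The following JPA-free sketch illustrates how and() merges two specifications into one – the Spec interface and all names here are illustrative stand-ins, not the real Spring Data API:

```java
import java.util.function.Predicate;

public class SpecCompositionDemo {
    // A minimal stand-in for Spring Data's Specification: toPredicate() produces
    // a predicate, and and() composes two specifications into one.
    interface Spec<T> {
        Predicate<T> toPredicate();

        default Spec<T> and(Spec<T> other) {
            return () -> toPredicate().and(other.toPredicate());
        }
    }

    record Department(int id, boolean hasOrganization) {}

    static boolean matches(int id, boolean hasOrganization) {
        Spec<Department> byId = () -> d -> d.id() == 1;
        Spec<Department> withOrganization = () -> Department::hasOrganization;
        // Mirrors the fetchEmployees().and(fetchOrganization()) call in the resolver.
        return byId.and(withOrganization).toPredicate()
                .test(new Department(id, hasOrganization));
    }

    public static void main(String[] args) {
        System.out.println(matches(1, true));  // true
        System.out.println(matches(1, false)); // false
    }
}
```

In the resolver above, byId(id).and(fetchEmployees()) composes in exactly this way, and Spring Data translates the combined predicate into a single SQL query.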

5. Filtering

For a start, let’s refer back to section 2 – Schemas. Inside employee.graphqls I defined two additional inputs, FilterField and EmployeeFilter, and also a single method employeesWithFilter that takes EmployeeFilter as an argument. The FilterField class is my custom implementation of a filter for GraphQL queries. It is very simple: it provides two filter types, one for numbers and one for strings, and generates a JPA Criteria Predicate. Of course, instead of creating such a filter implementation by yourself (like me), you may leverage some existing libraries. However, it does not require much time to do it yourself, as you can see in the following code. Our custom filter implementation has two parameters: operator and value.

@Data
public class FilterField {
   private String operator;
   private String value;

   public Predicate generateCriteria(CriteriaBuilder builder, Path field) {
      try {
         int v = Integer.parseInt(value);
         switch (operator) {
         case "lt": return builder.lt(field, v);
         case "le": return builder.le(field, v);
         case "gt": return builder.gt(field, v);
         case "ge": return builder.ge(field, v);
         case "eq": return builder.equal(field, v);
         }
      } catch (NumberFormatException e) {
         switch (operator) {
         case "endsWith": return builder.like(field, "%" + value);
         case "startsWith": return builder.like(field, value + "%");
         case "contains": return builder.like(field, "%" + value + "%");
         case "eq": return builder.equal(field, value);
         }
      }

      return null;
   }
}
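Stripped of the JPA Criteria types, the dispatch in generateCriteria can be illustrated with plain java.util.function predicates. This is a simplified sketch – toPredicate and the Number handling are illustrative, not part of the project:

```java
import java.util.function.Predicate;

public class FilterFieldDemo {
    // Numeric values get comparison operators, everything else gets string
    // operators - the same branching as FilterField.generateCriteria.
    static Predicate<Object> toPredicate(String operator, String value) {
        try {
            int v = Integer.parseInt(value);
            return switch (operator) {
                case "lt" -> o -> ((Number) o).intValue() < v;
                case "le" -> o -> ((Number) o).intValue() <= v;
                case "gt" -> o -> ((Number) o).intValue() > v;
                case "ge" -> o -> ((Number) o).intValue() >= v;
                case "eq" -> o -> ((Number) o).intValue() == v;
                default -> o -> false;
            };
        } catch (NumberFormatException e) {
            return switch (operator) {
                case "endsWith" -> o -> o.toString().endsWith(value);
                case "startsWith" -> o -> o.toString().startsWith(value);
                case "contains" -> o -> o.toString().contains(value);
                case "eq" -> o -> o.toString().equals(value);
                default -> o -> false;
            };
        }
    }

    public static void main(String[] args) {
        System.out.println(toPredicate("gt", "12000").test(15000));          // true
        System.out.println(toPredicate("contains", "velo").test("Developer")); // true
    }
}
```

The real implementation does the same dispatch, except each branch returns a CriteriaBuilder predicate bound to a Path instead of an in-memory check.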

Now, with FilterField we may create a concrete filter consisting of several simple FilterField entries. An example of such an implementation is the EmployeeFilter class, which allows filtering by three criteria: salary, age and position.

@Data
public class EmployeeFilter {
   private FilterField salary;
   private FilterField age;
   private FilterField position;
}

Now, if you would like to use that filter in your GraphQL query, you can write something like the query below. It searches for all developers that have a salary greater than 12000 and are older than 30.

{
  employeesWithFilter(filter: {
    salary: {
      operator: "gt"
      value: "12000"
    },
    age: {
      operator: "gt"
      value: "30"
    },
    position: {
      operator: "eq",
      value: "Developer"
    }
  }) {
    id
    firstName
    lastName
    position
  }
}

Let’s take a look at the implementation of the query resolver. Just as for fetching relations, we are using the JPA Criteria API and the Specification class. There are three methods that create a Specification for each of the possible filter fields. The filtering criteria are then built dynamically based on the content of EmployeeFilter.

@Component
public class EmployeeQueryResolver implements GraphQLQueryResolver {

   private EmployeeRepository repository;

   EmployeeQueryResolver(EmployeeRepository repository) {
      this.repository = repository;
   }

   // OTHER FIND METHODS ...
   
   public Iterable<Employee> employeesWithFilter(EmployeeFilter filter) {
      Specification<Employee> spec = null;
      if (filter.getSalary() != null)
         spec = bySalary(filter.getSalary());
      if (filter.getAge() != null)
         spec = (spec == null ? byAge(filter.getAge()) : spec.and(byAge(filter.getAge())));
      if (filter.getPosition() != null)
         spec = (spec == null ? byPosition(filter.getPosition()) :
               spec.and(byPosition(filter.getPosition())));
      if (spec != null)
         return repository.findAll(spec);
      else
         return repository.findAll();
   }

   private Specification<Employee> bySalary(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("salary"));
   }

   private Specification<Employee> byAge(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("age"));
   }

   private Specification<Employee> byPosition(FilterField filterField) {
      return (Specification<Employee>) (root, query, builder) -> filterField.generateCriteria(builder, root.get("position"));
   }
}
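As a side note, the chain of null checks in employeesWithFilter can be condensed by filtering out absent specifications and reducing the rest. The sketch below uses plain Java predicates as stand-ins; with Spring Data the same reduction works with Specification::and (the names here are hypothetical):

```java
import java.util.Objects;
import java.util.function.Predicate;
import java.util.stream.Stream;

public class SpecChainingDemo {
    record Employee(int salary, int age, String position) {}

    // Drop filters that were not provided (null), then reduce the remaining
    // ones with and(). If nothing was provided, match everything.
    @SafeVarargs
    static Predicate<Employee> combine(Predicate<Employee>... filters) {
        return Stream.of(filters)
                .filter(Objects::nonNull)
                .reduce(Predicate::and)
                .orElse(e -> true);
    }

    static boolean demo(int salary, String position) {
        Predicate<Employee> p = combine(
                e -> e.salary() > 12000,
                null, // the age filter was not provided
                e -> e.position().equals("Developer"));
        return p.test(new Employee(salary, 30, position));
    }

    public static void main(String[] args) {
        System.out.println(demo(20000, "Developer")); // true
        System.out.println(demo(10000, "Developer")); // false
    }
}
```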

6. Testing Spring Boot GraphQL JPA support

We will insert some test data into the H2 database by defining a data.sql file inside the src/main/resources directory.

insert into organization (id, name) values (1, 'Test1');
insert into organization (id, name) values (2, 'Test2');
insert into organization (id, name) values (3, 'Test3');
insert into organization (id, name) values (4, 'Test4');
insert into organization (id, name) values (5, 'Test5');
insert into department (id, name, organization_id) values (1, 'Test1', 1);
insert into department (id, name, organization_id) values (2, 'Test2', 1);
insert into department (id, name, organization_id) values (3, 'Test3', 1);
insert into department (id, name, organization_id) values (4, 'Test4', 2);
insert into department (id, name, organization_id) values (5, 'Test5', 2);
insert into department (id, name, organization_id) values (6, 'Test6', 3);
insert into department (id, name, organization_id) values (7, 'Test7', 4);
insert into department (id, name, organization_id) values (8, 'Test8', 5);
insert into department (id, name, organization_id) values (9, 'Test9', 5);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (1, 'John', 'Smith', 'Developer', 10000, 30, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (2, 'Adam', 'Hamilton', 'Developer', 12000, 35, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (3, 'Tracy', 'Smith', 'Architect', 15000, 40, 1, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (4, 'Lucy', 'Kim', 'Developer', 13000, 25, 2, 1);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (5, 'Peter', 'Wright', 'Director', 50000, 50, 4, 2);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (6, 'Alan', 'Murray', 'Developer', 20000, 37, 4, 2);
insert into employee (id, first_name, last_name, position, salary, age, department_id, organization_id) values (7, 'Pamela', 'Anderson', 'Analyst', 7000, 27, 4, 2);

Now, we can easily perform some test queries using GraphiQL, which is embedded into our application and available at http://localhost:8080/graphiql after startup. First, let’s verify the filtering query.

graphql-spring-boot-query-1

Now, we may test relation fetching by searching for a Department by id together with its list of employees and its organization.

graphql-spring-boot-query-2

The post An Advanced Guide to GraphQL with Spring Boot appeared first on Piotr's TechBlog.

]]>
Distributed Transactions in Microservices with Spring Boot https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/ https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/#comments Fri, 19 Jun 2020 10:13:34 +0000 http://piotrminkowski.com/?p=8144 When I’m talking about microservices with other people they are often asking me about an approach to distributed transactions. My advice is always the same – try to completely avoid distributed transactions in your microservices architecture. It is a very complex process with a lot of moving parts that can fail. That’s why it does […]

The post Distributed Transactions in Microservices with Spring Boot appeared first on Piotr's TechBlog.

]]>
When I’m talking about microservices with other people, they often ask me about an approach to distributed transactions. My advice is always the same – try to completely avoid distributed transactions in your microservices architecture. It is a very complex process with a lot of moving parts that can fail. That’s why it does not fit the nature of microservices-based systems.

However, if for any reason you need to use distributed transactions, there are two popular approaches: the Two-Phase Commit protocol, and Eventual Consistency and Compensation, also known as the Saga pattern. You can read some interesting articles about them online. Most of them discuss the theoretical aspects of those approaches, so in this article I’m going to present a sample implementation in Spring Boot. It is worth mentioning that there are some ready implementations of the Saga pattern, like the support for complex business transactions provided by the Axon Framework. The documentation of this solution is available here: https://docs.axoniq.io/reference-guide/implementing-domain-logic/complex-business-transactions.

Example

The source code with sample applications is as usual available on GitHub in the repository: https://github.com/piomin/sample-spring-microservices-transactions.git.

Architecture

First, we need to add a new component to our system. It is responsible solely for managing distributed transactions across microservices. That element is described as transaction-server on the diagram below. We also use another component popular in microservices-based architectures: discovery-server. There are three applications: order-service, account-service and product-service. The application order-service communicates with account-service and product-service. All these applications use a Postgres database as a backend store. Just for simplification I have run a single database with multiple tables. In a normal situation we would have a single database per microservice.

spring-microservice-transactions-arch1

Now, we will consider the following situation (it is visualized on the diagram below). The application order-service creates an order, stores it in the database, and then starts a new distributed transaction (1). After that, it communicates with product-service to update the current number of stored products and get their price (2). At the same time product-service sends information to transaction-server that it is participating in the transaction (3). Then order-service tries to withdraw the required funds from the customer account and transfer them into another account related to the seller (4). Finally, we roll back the transaction by throwing an exception inside the transactional method in order-service (6). This rollback should cause a rollback of the whole distributed transaction.

spring-microservices-transactions-arch2 (1)

Building transaction server

We start the implementation with transaction-server. The transaction server is responsible for managing distributed transactions across all microservices in our sample system. It exposes a REST API available to all other microservices for adding new transactions and updating their status. It also sends asynchronous broadcast events after receiving a transaction confirmation or rollback from a source microservice. It uses the RabbitMQ message broker for sending events to other microservices via a topic exchange. All other microservices listen for incoming events, and after receiving them they commit or roll back their transactions. We could avoid using a message broker for exchanging events and communicate over HTTP endpoints instead, but that makes sense only if we have a single instance of every microservice. Here’s the picture that illustrates the described architecture.

spring-microservice-transactions-server (1)

Let’s take a look at the list of required dependencies. It will be pretty much the same for the other applications. We need spring-boot-starter-amqp for integration with RabbitMQ, spring-boot-starter-web for exposing a REST API over HTTP, spring-cloud-starter-netflix-eureka-client for integration with the Eureka discovery server, and some basic Kotlin libraries.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

In the main class we define a topic exchange for the events sent to microservices. The name of the exchange is trx-events, and it is automatically created on RabbitMQ after application startup.

@SpringBootApplication
class TransactionServerApp {

    @Bean
    fun topic(): TopicExchange = TopicExchange("trx-events")
}

fun main(args: Array<String>) {
    runApplication<TransactionServerApp>(*args)
}

Here are the domain model classes used by the transaction server. The same classes are used by the microservices during communication with transaction-server.

data class DistributedTransaction(var id: String? = null, var status: DistributedTransactionStatus,
                                  val participants: MutableList<DistributedTransactionParticipant> = mutableListOf())
                          
class DistributedTransactionParticipant(val serviceId: String, var status: DistributedTransactionStatus)

enum class DistributedTransactionStatus {
    NEW, CONFIRMED, ROLLBACK, TO_ROLLBACK
}   

Here’s the controller class. It uses a simple in-memory repository implementation and RabbitTemplate for sending events to RabbitMQ. The HTTP API provides methods for adding a new transaction, finishing an existing transaction with a given status (CONFIRMED or ROLLBACK), searching for a transaction by id, and adding participants (new services) to a transaction.

@RestController
@RequestMapping("/transactions")
class TransactionController(val repository: TransactionRepository,
                            val template: RabbitTemplate) {

    @PostMapping
    fun add(@RequestBody transaction: DistributedTransaction): DistributedTransaction =
            repository.save(transaction)

    @GetMapping("/{id}")
    fun findById(@PathVariable id: String): DistributedTransaction? = repository.findById(id)

    @PutMapping("/{id}/finish/{status}")
    fun finish(@PathVariable id: String, @PathVariable status: DistributedTransactionStatus) {
        val transaction: DistributedTransaction? = repository.findById(id)
        if (transaction != null) {
            transaction.status = status
            repository.update(transaction)
            template.convertAndSend("trx-events", DistributedTransaction(id, status))
        }
    }

    @PutMapping("/{id}/participants")
    fun addParticipant(@PathVariable id: String,
                       @RequestBody participant: DistributedTransactionParticipant) =
            repository.findById(id)?.participants?.add(participant)

    @PutMapping("/{id}/participants/{serviceId}/status/{status}")
    fun updateParticipant(@PathVariable id: String,
                          @PathVariable serviceId: String,
                          @PathVariable status: DistributedTransactionStatus) {
        val transaction: DistributedTransaction? = repository.findById(id)
        if (transaction != null) {
            val index = transaction.participants.indexOfFirst { it.serviceId == serviceId }
            if (index != -1) {
                transaction.participants[index].status = status
                template.convertAndSend("trx-events", DistributedTransaction(id, status))
            }
        }
    }

}   

Handling transactions in downstream services

Let’s analyze how our microservices handle transactions using account-service as an example. Here’s the implementation of AccountService, which is called by the controller for transferring funds from/to an account. All methods here are @Transactional and – this deserves attention – @Async. It means that each method runs in a new thread and is processed asynchronously. Why? That’s the key concept here. We will block the transaction in order to wait for confirmation from transaction-server, but the main thread used by the controller will not be blocked. It returns the response with the current state of the Account immediately.

@Service
@Transactional
@Async
class AccountService(val repository: AccountRepository,
                     var applicationEventPublisher: ApplicationEventPublisher) {
    
    fun payment(id: Int, amount: Int, transactionId: String) =
            transfer(id, amount, transactionId)

    fun withdrawal(id: Int, amount: Int, transactionId: String) =
            transfer(id, (-1) * amount, transactionId)

    private fun transfer(id: Int, amount: Int, transactionId: String) {
        val accountOpt: Optional<Account> = repository.findById(id)
        if (accountOpt.isPresent) {
            val account: Account = accountOpt.get()
            account.balance += amount
            applicationEventPublisher.publishEvent(AccountTransactionEvent(transactionId, account))
            repository.save(account)
        }
    }

}
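The relationship between the two public methods is just a sign flip over a single transfer routine, which a minimal, JPA-free sketch makes explicit (the class and method names here are illustrative, written in plain Java for brevity):

```java
public class TransferDemo {
    // payment adds funds; withdrawal is the same transfer with the amount
    // negated, mirroring the AccountService structure.
    static int payment(int balance, int amount) { return transfer(balance, amount); }
    static int withdrawal(int balance, int amount) { return transfer(balance, -1 * amount); }
    private static int transfer(int balance, int amount) { return balance + amount; }

    public static void main(String[] args) {
        System.out.println(payment(1000, 200));    // 1200
        System.out.println(withdrawal(1000, 200)); // 800
    }
}
```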

Here’s the implementation of the controller class. As you can see, it calls the methods from AccountService, which are processed asynchronously. The returned Account object is taken from the EventBus bean. This bean is responsible for exchanging asynchronous events within the application scope. An event is sent by the AccountTransactionListener bean responsible for handling Spring transaction events.

@RestController
@RequestMapping("/accounts")
class AccountController(val repository: AccountRepository,
                        val service: AccountService,
                        val eventBus: EventBus) {

    @PostMapping
    fun add(@RequestBody account: Account): Account = repository.save(account)

    @GetMapping("/customer/{customerId}")
    fun findByCustomerId(@PathVariable customerId: Int): List<Account> =
            repository.findByCustomerId(customerId)

    @PutMapping("/{id}/payment/{amount}")
    fun payment(@PathVariable id: Int, @PathVariable amount: Int,
                @RequestHeader("X-Transaction-ID") transactionId: String): Account {
        service.payment(id, amount, transactionId)
        return eventBus.receiveEvent(transactionId)!!.account
    }

    @PutMapping("/{id}/withdrawal/{amount}")
    fun withdrawal(@PathVariable id: Int, @PathVariable amount: Int,
                   @RequestHeader("X-Transaction-ID") transactionId: String): Account {
        service.withdrawal(id, amount, transactionId)
        return eventBus.receiveEvent(transactionId)!!.account
    }

}

The event object exchanged between beans is very simple. It contains the id of the transaction and the current Account object.


class AccountTransactionEvent(val transactionId: String, val account: Account)

Finally, let’s take a look at the implementation of the AccountTransactionListener bean responsible for handling transactional events. We are using Spring’s @TransactionalEventListener to annotate methods that should handle incoming events. There are 4 possible phases to handle: BEFORE_COMMIT, AFTER_COMMIT, AFTER_ROLLBACK and AFTER_COMPLETION. There is one very important thing about @TransactionalEventListener, which may not be very intuitive: it is processed in the same thread as the transaction. So if you do something that should not block the transactional thread, you should annotate the listener with @Async. However, in our case this behaviour is required, since we need to block the transactional thread until we receive a confirmation or rollback from transaction-server for the given transaction. These events are sent by transaction-server through RabbitMQ, and they are also exchanged between beans using EventBus. If the status of the received event is different than CONFIRMED, we throw an exception to roll back the transaction.
The AccountTransactionListener also listens for AFTER_ROLLBACK and AFTER_COMPLETION. After receiving such an event, it changes the status of the transaction participant by calling the endpoint exposed by transaction-server.

@Component
class AccountTransactionListener(val restTemplate: RestTemplate,
                                 val eventBus: EventBus) {

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    @Throws(AccountProcessingException::class)
    fun handleEvent(event: AccountTransactionEvent) {
        eventBus.sendEvent(event)
        var transaction: DistributedTransaction? = null
        for (x in 0..100) {
            transaction = eventBus.receiveTransaction(event.transactionId)
            if (transaction == null)
                Thread.sleep(100)
            else break
        }
        if (transaction == null || transaction.status != DistributedTransactionStatus.CONFIRMED)
            throw AccountProcessingException()
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    fun handleAfterRollback(event: AccountTransactionEvent) {
        restTemplate.put("http://transaction-server/transactions/{transactionId}/participants/{serviceId}/status/{status}",
                null, event.transactionId, "account-service", "TO_ROLLBACK")
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMPLETION)
    fun handleAfterCompletion(event: AccountTransactionEvent) {
        restTemplate.put("http://transaction-server/transactions/{transactionId}/participants/{serviceId}/status/{status}",
                null, event.transactionId, "account-service", "CONFIRM")
    }
    
}

Here’s the implementation of the bean responsible for receiving asynchronous events from the message broker. As you can see, after receiving such an event it uses EventBus to forward it to other beans.

@Component
class DistributedTransactionEventListener(val eventBus: EventBus) {

    @RabbitListener(bindings = [
        QueueBinding(exchange = Exchange(type = ExchangeTypes.TOPIC, name = "trx-events"),
                value = Queue("trx-events-account"))
    ])
    fun onMessage(transaction: DistributedTransaction) {
        eventBus.sendTransaction(transaction)
    }

}
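The EventBus bean itself is not shown in this excerpt. To illustrate the idea (the RabbitMQ listener stores the incoming transaction status, and the blocking @TransactionalEventListener polls for it), here is a minimal, hypothetical sketch. It is written in Java purely for illustration, with method names assumed from the calls above; the real sample application is in Kotlin and may differ:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the transaction status object exchanged between beans
class DistributedTransaction {
    final String id;
    final String status; // e.g. "NEW", "CONFIRMED"

    DistributedTransaction(String id, String status) {
        this.id = id;
        this.status = status;
    }
}

// Minimal sketch of an in-memory EventBus keyed by transaction id
class EventBus {
    private final Map<String, DistributedTransaction> transactions = new ConcurrentHashMap<>();

    // Called by the RabbitMQ listener when transaction-server publishes a status update
    void sendTransaction(DistributedTransaction tx) {
        transactions.put(tx.id, tx);
    }

    // Polled by the @TransactionalEventListener; returns null until a status arrives
    DistributedTransaction receiveTransaction(String transactionId) {
        return transactions.remove(transactionId);
    }
}
```

With such a bean, the polling loop in handleEvent simply retries receiveTransaction every 100 ms until the RabbitMQ listener has put a status for that transaction id into the map.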

Integration with database

Of course, our application uses Postgres as a backend store, so we need to provide the integration. In fact, that is the simplest step of our implementation. First, we need to add the following two dependencies. We will use Spring Data JPA for the integration with Postgres.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Our entity is very simple. Besides the id field it contains two fields: customerId and balance.


@Entity
data class Account(@Id @GeneratedValue(strategy = GenerationType.AUTO) val id: Int,
                   val customerId: Int,
                   var balance: Int)

We are using the well-known Spring Data repository pattern.

interface AccountRepository: CrudRepository<Account, Int> {

    fun findByCustomerId(id: Int): List<Account>

}

Here’s the suggested list of configuration settings.

spring:
  application:
    name: account-service
  datasource:
    url: jdbc:postgresql://postgresql:5432/trx
    username: trx
    password: trx
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: create
    show-sql: true
    properties:
      hibernate:
        format_sql: true
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Building order-service

Ok, we have already finished the implementation of transaction-server and the two microservices account-service and product-service. Since the implementation of product-service is very similar to account-service, I have explained everything using account-service as the example. Now, we may proceed to the last part – the implementation of order-service. It is responsible for starting a new transaction and marking it as finished. It may also finish it with a rollback. Of course, rollback events may be sent by the other two applications as well.
The implementation of the @RestController class is visible below. I’ll describe it step by step. We start a new distributed transaction by calling the POST /transactions endpoint exposed by transaction-server (1). Then we store a new order in the database (2). When we call a transactional method on a downstream service, we need to set the HTTP header X-Transaction-ID. The first transactional method called here is PUT /products/{id}/count/{count} (3). It updates the number of products in the store, and we calculate the final price (4). In the next step we call another transactional method – this time from account-service (5). It is responsible for withdrawing money from the customer’s account. We enable Spring transaction event processing (6). In the last step we generate a random number and, based on its value, the application throws an exception to roll back the transaction (7).

@RestController
@RequestMapping("/orders")
class OrderController(val repository: OrderRepository,
                      val restTemplate: RestTemplate,
          var applicationEventPublisher: ApplicationEventPublisher) {

    @PostMapping
    @Transactional
    @Throws(OrderProcessingException::class)
    fun addAndRollback(@RequestBody order: Order) {
        var transaction  = restTemplate.postForObject("http://transaction-server/transactions",
                DistributedTransaction(), DistributedTransaction::class.java) // (1)
        val orderSaved = repository.save(order) // (2)
        val product = updateProduct(transaction!!.id!!, order) // (3)
        val totalPrice = product.price * product.count // (4)
        val accounts = restTemplate.getForObject("http://account-service/accounts/customer/{customerId}",
                Array<Account>::class.java, order.customerId)
        val account  = accounts!!.first { it.balance >= totalPrice}
        updateAccount(transaction.id!!, account.id, totalPrice) // (5)
        applicationEventPublisher.publishEvent(OrderTransactionEvent(transaction.id!!)) // (6)
        val r = Random.nextInt(100) // (7)
        if (r % 2 == 0)
            throw OrderProcessingException()
    }

    fun updateProduct(transactionId: String, order: Order): Product {
        val headers = HttpHeaders()
        headers.set("X-Transaction-ID", transactionId)
        val entity: HttpEntity<*> = HttpEntity<Any?>(headers)
        val product = restTemplate.exchange("http://product-service/products/{id}/count/{count}",
                HttpMethod.PUT, entity, Product::class.java, order.id, order.count)
        return product.body!!
    }

    fun updateAccount(transactionId: String, accountId: Int, totalPrice: Int): Account {
        val headers = HttpHeaders()
        headers.set("X-Transaction-ID", transactionId)
        val entity: HttpEntity<*> = HttpEntity<Any?>(headers)
        val account = restTemplate.exchange("http://account-service/accounts/{id}/withdrawal/{amount}",
                HttpMethod.PUT, entity, Account::class.java, accountId, totalPrice)
        return account.body!!
    }
}

Conclusion

Even a trivial implementation of distributed transactions in microservices, like the one demonstrated in this article, can be complicated. As you can see, we need to add a new element to our architecture, transaction-server, responsible only for distributed transaction management. We also have to add a message broker in order to exchange events between our applications and transaction-server. However, many of you have been asking me about distributed transactions in the microservices world, so I decided to build this simple demo. I’m waiting for your feedback and opinions.

The post Distributed Transactions in Microservices with Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/19/distributed-transactions-in-microservices-with-spring-boot/feed/ 19 8144
JPA Data Access with Micronaut Data https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/ https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/#respond Thu, 25 Jul 2019 11:58:39 +0000 https://piotrminkowski.wordpress.com/?p=7196 When I have been writing some articles comparing Spring and Micronaut frameworks recently, I have taken note of many comments about the lack of built-in ORM and data repositories supported by Micronaut. Spring provides this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close […]

The post JPA Data Access with Micronaut Data appeared first on Piotr's TechBlog.

]]>
When I was writing some articles comparing the Spring and Micronaut frameworks recently, I took note of many comments about the lack of a built-in ORM and data repositories in Micronaut. Spring has provided this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close to completing work on the first version of their project with ORM support. The project, called Micronaut Data (formerly Micronaut Predator, short for Precomputed Data Repositories), is still under active development, and currently we may access just the snapshot version. However, the authors introduce it as more efficient, with reduced memory consumption, compared to competitive solutions like Spring Data or Grails GORM. In short, this is achieved thanks to Ahead-of-Time (AoT) compilation that pre-computes queries for repository interfaces, which are then executed by a thin, lightweight runtime layer, avoiding the usage of reflection or runtime proxies.
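To build some intuition for what “pre-computing queries” from repository method names means, here is a grossly simplified, hypothetical sketch of how a finder method name could be translated into a JPA-QL string at build time. This is not Micronaut’s actual implementation, just an illustration of the general idea:

```java
// Conceptual sketch (NOT Micronaut internals): translating a finder method name,
// such as findBySalaryGreaterThan, into a JPA-QL query string. An AoT processor
// can do this once at compile time, so no reflection is needed at runtime.
class FinderNameTranslator {

    static String toJpql(String entity, String methodName) {
        if (!methodName.startsWith("findBy")) {
            throw new IllegalArgumentException("unsupported method: " + methodName);
        }
        String criteria = methodName.substring("findBy".length());
        String property;
        String operator;
        // Recognize a single suffix operator for this toy example
        if (criteria.endsWith("GreaterThan")) {
            property = criteria.substring(0, criteria.length() - "GreaterThan".length());
            operator = " > ";
        } else {
            property = criteria;
            operator = " = ";
        }
        // Lower-case the first letter to get the entity field name
        String field = Character.toLowerCase(property.charAt(0)) + property.substring(1);
        return "SELECT e FROM " + entity + " e WHERE e." + field + operator + ":" + field;
    }
}
```

For example, toJpql("Employee", "findBySalaryGreaterThan") would yield "SELECT e FROM Employee e WHERE e.salary > :salary". The real implementation handles many more operators, projections, ordering and paging, but the point is the same: the query is derived from the method signature before the application ever runs.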

Currently, Micronaut Data provides runtime support for JPA (Hibernate) and SQL (JDBC). Other implementations are planned for the future. In this article I’m going to show you how to include Micronaut Data in your application and use its main features for JPA data access.

1. Dependencies

The snapshot dependency of Micronaut Data is available at https://oss.sonatype.org/content/repositories/snapshots/, so first we need to add it to the repository list in our pom.xml, together with jcenter:

<repositories>
   <repository>
      <id>jcenter.bintray.com</id>
      <url>https://jcenter.bintray.com</url>
   </repository>
   <repository>
      <id>sonatype-snapshots</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
   </repository>
</repositories>

In addition to the standard libraries included for building a web application with Micronaut, we have to add the following dependencies: a database driver (we will use PostgreSQL as the database for our sample application) and micronaut-predator-hibernate-jpa.

<dependency>
   <groupId>io.micronaut.data</groupId>
   <artifactId>micronaut-predator-hibernate-jpa</artifactId>
   <version>${predator.version}</version>
   <scope>compile</scope>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-jdbc-tomcat</artifactId>
   <scope>runtime</scope>
</dependency>    
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.6</version>
</dependency> 

Some Micronaut libraries, including micronaut-predator-processor, have to be added to the annotation processor path. Such a configuration should be provided inside the Maven Compiler Plugin configuration:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.7.0</version>
   <configuration>
      <source>${jdk.version}</source>
      <target>${jdk.version}</target>
      <encoding>UTF-8</encoding>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.data</groupId>
            <artifactId>micronaut-predator-processor</artifactId>
            <version>${predator.version}</version>
         </path>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-validation</artifactId>
            <version>${micronaut.version}</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
</plugin>

The newest RC version of Micronaut at the time of writing is 1.2.0.RC2:

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.micronaut</groupId>
         <artifactId>micronaut-bom</artifactId>
         <version>1.2.0.RC2</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

2. Domain Model

Our database model consists of four tables, as shown below. The same database model has been used in some of my previous examples, including those for Spring Data usage. We have the employee table. Each employee is assigned to exactly one department and one organization. Each department is assigned to exactly one organization. There is also the employment table, which provides a history of employment for every single employee.

[Diagram: database model – micronaut-data-jpa-1]

Here is the implementation of entity classes corresponding to the database model. Let’s start from Employee class:

@Entity
public class Employee {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employee_id_seq")
   @SequenceGenerator(name = "employee_id_seq", sequenceName = "employee_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   private int age;
   private String position;
   private int salary;
   @ManyToOne
   private Organization organization;
   @ManyToOne
   private Department department;
   @OneToMany
   private Set<Employment> employments;
   
   // ... GETTERS AND SETTERS
}

Here’s the implementation of Department class:


@Entity
public class Department {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq")
   @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   @OneToMany
   private Set<Employee> employees;
   @ManyToOne
   private Organization organization;
   
   // ... GETTERS AND SETTERS
}

And here’s Organization entity:


@Entity
public class Organization {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "organization_id_seq")
    @SequenceGenerator(name = "organization_id_seq", sequenceName = "organization_id_seq", allocationSize = 1)
    private Long id;
    private String name;
    private String address;
    @OneToMany
    private Set<Department> departments;
    @OneToMany
    private Set<Employee> employees;
   
   // ... GETTERS AND SETTERS
}

And the last entity Employment:

@Entity
public class Employment {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employment_id_seq")
    @SequenceGenerator(name = "employment_id_seq", sequenceName = "employment_id_seq", allocationSize = 1)
    private Long id;
    @ManyToOne
    private Employee employee;
    @ManyToOne
    private Organization organization;
    @Temporal(TemporalType.DATE)
    private Date start;
    @Temporal(TemporalType.DATE)
    private Date end;
   
   // ... GETTERS AND SETTERS
}

3. Creating JPA repositories with Micronaut Data

If you are familiar with the Spring Data repositories pattern, you won’t have any problems using Micronaut repositories. The approach to declaring repositories and building queries is the same as in Spring Data. You need to declare an interface (or an abstract class) annotated with @Repository that extends the CrudRepository interface. CrudRepository is not the only interface that can be extended. You can also use GenericRepository, AsyncCrudRepository for asynchronous operations, ReactiveStreamsCrudRepository for reactive CRUD execution, or PageableRepository, which adds methods for pagination. A typical repository declaration looks as shown below.

@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    Set<EmployeeDTO> findBySalaryGreaterThan(int salary);

    Set<EmployeeDTO> findByOrganization(Organization organization);

    int findAvgSalaryByAge(int age);

    int findAvgSalaryByOrganization(Organization organization);

}

I have declared some additional find methods there. The most common query prefix is find, but you can also use search, query, get, read, or retrieve. The first two queries return all employees with a salary greater than a given value and all employees assigned to a given organization. The Employee entity is in a many-to-one relation with Organization, so we may also use relational fields as query parameters. It is noteworthy that both queries return DTO objects inside the result collection. That’s possible because Micronaut Data supports reflection-free Data Transfer Object (DTO) projections if the return type is annotated with @Introspected. Here’s the declaration of EmployeeDTO.

@Introspected
public class EmployeeDTO {

    private String name;
    private int age;
    private String position;
    private int salary;
   
    // ... GETTERS AND SETTERS
}
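To see why @Introspected matters here: Micronaut generates bean-introspection metadata at compile time, so the DTO can be populated without java.lang.reflect. The following plain-Java sketch contrasts the two styles of access (illustrative only, this is not the code Micronaut actually generates):

```java
import java.lang.reflect.Field;

// Illustrative only: contrasts reflective DTO population with the direct,
// compile-time-generated style of access that @Introspected enables.
class EmployeeDtoSketch {

    static class EmployeeDTO {
        String name;
        int salary;
    }

    // Reflection-based population: field lookup and access checks happen at runtime
    static EmployeeDTO viaReflection(String name, int salary) {
        try {
            EmployeeDTO dto = new EmployeeDTO();
            Field f = EmployeeDTO.class.getDeclaredField("name");
            f.setAccessible(true);
            f.set(dto, name);
            Field s = EmployeeDTO.class.getDeclaredField("salary");
            s.setAccessible(true);
            s.setInt(dto, salary);
            return dto;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Direct population: what generated introspection code can do instead,
    // with no runtime lookup cost and no reflective access
    static EmployeeDTO direct(String name, int salary) {
        EmployeeDTO dto = new EmployeeDTO();
        dto.name = name;
        dto.salary = salary;
        return dto;
    }
}
```

Both paths produce the same DTO; the difference is that the first resolves fields at runtime while the second is fixed at compile time, which is exactly the trade-off the framework authors cite for performance and memory usage.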

The EmployeeRepository contains two other methods using aggregation expressions. The method findAvgSalaryByAge counts the average salary for a given age of employees, while findAvgSalaryByOrganization counts the average salary for a given organization.
For comparison, let’s take a look at another repository implementation, EmploymentRepository. We need two additional find methods. The first, findByEmployeeOrderByStartDesc, searches the employment history for a given employee, ordered by start date descending. The second method finds an employment without an end date set, which in fact means the employment for the current job.

@Repository
public interface EmploymentRepository extends CrudRepository<Employment, Long> {

    Set<EmploymentDTO> findByEmployeeOrderByStartDesc(Employee employee);

    Employment findByEmployeeAndEndIsNull(Employee employee);

}

Micronaut Data is able to manage transactions automatically. You just need to annotate your method with @Transactional. In the source code fragment visible below, you can see the method used when an employee changes jobs. We perform a bunch of save operations inside that method. First, we change the target department and organization for the given employee, then we create a new employment history record for the new job, and we also set the end date on the previous employment entity (found using the repository method findByEmployeeAndEndIsNull).

@Inject
DepartmentRepository departmentRepository;
@Inject
EmployeeRepository employeeRepository;
@Inject
EmploymentRepository employmentRepository;

@Transactional
public void changeJob(Long employeeId, Long targetDepartmentId) {
   Optional<Employee> employee = employeeRepository.findById(employeeId);
   employee.ifPresent(employee1 -> {
      Optional<Department> department = departmentRepository.findById(targetDepartmentId);
      department.ifPresent(department1 -> {
         employee1.setDepartment(department1);
         employee1.setOrganization(department1.getOrganization());
         Employment employment = new Employment(employee1, department1.getOrganization(), new Date());
         employmentRepository.save(employment);
         Employment previousEmployment = employmentRepository.findByEmployeeAndEndIsNull(employee1);
         previousEmployment.setEnd(new Date());
         employmentRepository.save(previousEmployment);
      });
   });
}

Ok, now let’s move on to the last repository implementation discussed in this section – OrganizationRepository. Since the Organization entity is in lazily loaded one-to-many relations with Employee and Department, we need to fetch that data in order to present the dependencies in the output. To achieve this, we can use the @Join annotation on the repository interface, specifying a JOIN FETCH. Since the @Join annotation is repeatable, it can be specified multiple times for different associations, as shown below.

@Repository
public interface OrganizationRepository extends CrudRepository<Organization, Long> {

    @Join(value = "departments", type = Join.Type.LEFT_FETCH)
    @Join(value = "employees", type = Join.Type.LEFT_FETCH)
    Optional<Organization> findByName(String name);

}

4. Batch operations

Micronaut Data repositories support batch operations. This can sometimes be useful, for example in automated tests. Here’s my simple JUnit test that adds multiple employees to a single department inside an organization:

@Test
public void addMultiple() {
   List<Employee> employees = Arrays.asList(
      new Employee("Test1", 20, "Developer", 5000),
      new Employee("Test2", 30, "Analyst", 15000),
      new Employee("Test3", 40, "Manager", 25000),
      new Employee("Test4", 25, "Developer", 9000),
      new Employee("Test5", 23, "Analyst", 8000),
      new Employee("Test6", 50, "Developer", 12000),
      new Employee("Test7", 55, "Architect", 25000),
      new Employee("Test8", 43, "Manager", 15000)
   );

   Organization organization = new Organization("TestWithEmployees", "TestAddress");
   Organization organizationSaved = organizationRepository.save(organization);
   Assertions.assertNotNull(organization.getId());
   Department department = new Department("TestWithEmployees");
   department.setOrganization(organization);
   Department departmentSaved = departmentRepository.save(department);
   Assertions.assertNotNull(department.getId());
   employeeRepository.saveAll(employees.stream().map(employee -> {
      employee.setOrganization(organizationSaved);
      employee.setDepartment(departmentSaved);
      return employee;
   }).collect(Collectors.toList()));
}

5. Controllers

Finally, the last implementation step – building REST controllers. OrganizationController is pretty simple. It injects OrganizationRepository and uses it for saving entities and searching for them by name. Here’s the implementation:

@Controller("organizations")
public class OrganizationController {

    @Inject
    OrganizationRepository repository;

    @Post("/organization")
    public Long addOrganization(@Body Organization organization) {
        Organization organization1 = repository.save(organization);
        return organization1.getId();
    }

    @Get("/organization/name/{name}")
    public Optional<Organization> findOrganization(@NotNull String name) {
        return repository.findByName(name);
    }

}

EmployeeController is a little bit more complicated. The implementation exposes the four additional find methods defined in EmployeeRepository. There are also methods for adding a new employee and assigning them to a department, and for changing a job, both implemented inside the SampleService bean.

@Controller("employees")
public class EmployeeController {

    @Inject
    EmployeeRepository repository;
    @Inject
    OrganizationRepository organizationRepository;
    @Inject
    SampleService service;

    @Get("/salary/{salary}")
    public Set<EmployeeDTO> findEmployeesBySalary(int salary) {
        return repository.findBySalaryGreaterThan(salary);
    }

    @Get("/organization/{organizationId}")
    public Set<EmployeeDTO> findEmployeesByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findByOrganization(organization.get());
    }

    @Get("/salary-avg/age/{age}")
    public int findAvgSalaryByAge(int age) {
        return repository.findAvgSalaryByAge(age);
    }

    @Get("/salary-avg/organization/{organizationId}")
    public int findAvgSalaryByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findAvgSalaryByOrganization(organization.get());
    }

    @Post("/{departmentId}")
    public void addNewEmployee(@Body Employee employee, Long departmentId) {
        service.hireEmployee(employee, departmentId);
    }

    @Put("/change-job")
    public void changeJob(@Body ChangeJobRequest request) {
        service.changeJob(request.getEmployeeId(), request.getTargetOrganizationId());
    }

}

6. Configuring database connection

As usual, we use a Docker image for running the database instance locally. Here’s the command that runs a container with Postgres and exposes it on port 5432:

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=predator -e POSTGRES_PASSWORD=predator123 -e POSTGRES_DB=predator postgres

After startup, my Postgres instance is available at the virtual address 192.168.99.100, so I have to set it in the Micronaut application.yml. Besides the database connection settings, we will also set some JPA properties that enable SQL logging and automatically apply model changes to the database schema. Here’s the full configuration of our sample application inside application.yml:

micronaut:
  application:
    name: sample-micronaut-jpa

jackson:
  bean-introspection-module: true

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/predator?ssl=false
    driverClassName: org.postgresql.Driver
    username: predator
    password: predator123

jpa:
  default:
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Conclusion

The support for ORM was one of the most anticipated features of the Micronaut framework. Not only will it be available in a release version soon, but it is also almost 1.5x faster than Spring Data JPA – according to the article https://objectcomputing.com/news/2019/07/18/unleashing-predator-precomputed-data-repositories written by the leader of the Micronaut project, Graeme Rocher. In my opinion, the support for ORM via Micronaut Data may be the reason that developers decide to use Micronaut instead of Spring Boot.
In this article, I have demonstrated the most interesting features of Micronaut Data JPA. I think that it will be continuously improved, and we will see some new useful features soon. The sample application source code is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-jpa.git. Before starting with Micronaut Data, it is worth reading about the basics: Micronaut Tutorial: Beans and scopes.

The post JPA Data Access with Micronaut Data appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/feed/ 0 7196