Serverless Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/serverless/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Serverless on Azure Function with Quarkus
https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/
Wed, 31 Jan 2024

This article will teach you how to create and run serverless apps on Azure Functions using the Quarkus Funqy extension. You can compare it to the Spring Boot and Spring Cloud support for Azure Functions described in my previous article. There are also several other articles about Quarkus on my blog. If you are interested in Kubernetes-native solutions, you can read more about serverless functions on OpenShift here.

Source Code

If you would like to try it yourself, you can always take a look at my source code by cloning my GitHub repository. The Quarkus app used in this article is located in the account-function directory. Once you are in that directory, just follow my instructions below.

Prerequisites

There are some prerequisites before you start the exercise. You need to install JDK 17+ and Maven on your local machine. You also need an Azure account and the az CLI to interact with it. Once you install the az CLI and log in to Azure, you can execute the following command for verification:

$ az account show

If you would like to test Azure Functions locally, you need to install Azure Functions Core Tools. You can find detailed installation instructions in Microsoft Docs here. For macOS, there are three required commands to run:

$ brew tap azure/functions
$ brew install azure-functions-core-tools@4
$ brew link --overwrite azure-functions-core-tools@4

Create Resources on Azure

Before proceeding with the source code, we must create several required resources on the Azure cloud. In the first step, we will prepare a resource group for all the required objects. The name of the group is quarkus-serverless. The location depends on your preferences. For me it is eastus.

$ az group create -l eastus -n quarkus-serverless

In the next step, we need to create a storage account. The Azure Function service requires it, but we will also use that account during the local development with Azure Functions Core Tools.

$ az storage account create -n pminkowsserverless \
     -g quarkus-serverless \
     -l eastus \
     --sku Standard_LRS

In order to run serverless apps with the Quarkus Azure extension, we need to create an Azure Function App instance. Of course, we use the previously created resource group and storage account. The name of my Function App instance is pminkows-account-function. We can also set a default OS type (Linux), functions version (4), and runtime stack (Java) for the Function App.

$ az functionapp create -n pminkows-account-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g quarkus-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

Now, let’s switch to the Azure Portal. Then, find the quarkus-serverless resource group. You should have the same list of resources inside this group as shown below. It means that our environment is ready and we can proceed to the app implementation.

[Image: quarkus-azure-function-resources]

Building Serverless Apps with Quarkus Funqy HTTP

In this article, we will consider the simplest option for building and running Quarkus apps on Azure Functions. Therefore, we include the Quarkus Funqy HTTP extension. It provides a simple way to expose services as HTTP endpoints, but shouldn't be treated as a replacement for REST over HTTP. If you need full REST functionality, you can use, for example, the Quarkus RESTEasy module with the Azure Functions Java library. In order to deploy the app on the Azure Functions service, we need to include the quarkus-azure-functions-http extension. Our function will also store data in the H2 in-memory database through the Panache module integration. Here's a list of required Maven dependencies:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-azure-functions-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

With the quarkus-azure-functions-http extension we don't need to include and configure any Maven plugin to deploy an app on Azure. That extension does the whole deployment work for us. By default, Quarkus uses the Azure CLI in the background to authenticate and deploy to Azure. We just need to provide several configuration properties with the quarkus.azure-functions prefix inside the Quarkus application.properties file. In the configuration section, we have to set the name of the Azure Function App instance (pminkows-account-function), the target resource group (quarkus-serverless), the region (eastus), and the service plan (EastUSLinuxDynamicPlan). We will also add several properties responsible for the database connection and for setting the root API context path (/api).

quarkus.azure-functions.app-name = pminkows-account-function
quarkus.azure-functions.app-service-plan-name = EastUSLinuxDynamicPlan
quarkus.azure-functions.resource-group = quarkus-serverless
quarkus.azure-functions.region = eastus
quarkus.azure-functions.runtime.java-version = 17

quarkus.datasource.db-kind = h2
quarkus.datasource.username = sa
quarkus.datasource.password = password
quarkus.datasource.jdbc.url = jdbc:h2:mem:testdb
quarkus.hibernate-orm.database.generation = drop-and-create

quarkus.http.root-path = /api

Here’s our @Entity class. We take advantage of the Quarkus Panache active record pattern.

@Entity
public class Account extends PanacheEntity {
    public String number;
    public int balance;
    public Long customerId;
}
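To make the active record idea concrete: with Panache, persistence operations live on the entity itself rather than on a separate repository. Below is a minimal, framework-free sketch of that pattern, using an in-memory list as a stand-in for the database. The class and method names here are illustrative only, not Panache's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Minimal sketch of the active record pattern: the entity itself
// carries the persistence operations. Panache generates this kind of
// machinery for us; the in-memory list is just a database stand-in.
public class ActiveRecordSketch {

    public static class Account {
        static final List<Account> STORE = new ArrayList<>();

        public String number;
        public int balance;
        public Long customerId;

        public Account(String number, int balance, Long customerId) {
            this.number = number;
            this.balance = balance;
            this.customerId = customerId;
        }

        // Analogous to PanacheEntity.persist()
        public void persist() {
            STORE.add(this);
        }

        // Analogous to Account.find("number", n).singleResult()
        public static Optional<Account> findByNumber(String number) {
            return STORE.stream()
                    .filter(a -> a.number.equals(number))
                    .findFirst();
        }
    }

    public static void main(String[] args) {
        new Account("124", 1000, 1L).persist();
        Account found = Account.findByNumber("124").orElseThrow();
        System.out.println("found account " + found.number);
    }
}
```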

Let's take a look at the implementation of our Quarkus HTTP functions. By default, with the Quarkus Funqy extension, the URL path used to execute a function is the function name. We just need to annotate the target method with @Funq. To override the default path, we set the desired name as the annotation's value. There are two methods. The addAccount method is responsible for adding new accounts and is exposed under the add-account path. The findByNumber method, in turn, allows us to find an account by its number under the by-number path. This approach allows us to deploy multiple Funqy functions on a single Azure Function.

public class AccountFunctionResource {

    @Inject
    Logger log;

    @Funq("add-account")
    @Transactional
    public Account addAccount(Account account) {
        log.infof("Add: %s", account);
        Account.persist(account);
        return account;
    }

    @Funq("by-number")
    public Account findByNumber(Account account) {
        log.infof("Find: %s", account.number);
        return Account
                .find("number", account.number)
                .singleResult();
    }
}
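Conceptually, Funqy's name-based routing is a dispatch table: the last path segment selects the registered function. The plain-Java sketch below illustrates the idea only; it is not how Quarkus actually wires Funqy internally.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of name-based function routing: the last path
// segment ("add-account", "by-number") selects a registered handler.
public class FunqyRoutingSketch {

    private final Map<String, Function<String, String>> routes = new HashMap<>();

    public void register(String name, Function<String, String> fn) {
        routes.put(name, fn);
    }

    public String invoke(String path, String body) {
        // Extract the function name from the URL path
        String name = path.substring(path.lastIndexOf('/') + 1);
        Function<String, String> fn = routes.get(name);
        if (fn == null) {
            throw new IllegalArgumentException("No function: " + name);
        }
        return fn.apply(body);
    }

    public static void main(String[] args) {
        FunqyRoutingSketch router = new FunqyRoutingSketch();
        router.register("add-account", body -> "added:" + body);
        router.register("by-number", body -> "found:" + body);
        System.out.println(router.invoke("/api/add-account", "124"));
    }
}
```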

Running Azure Functions Locally with Quarkus

Before we deploy our functions on Azure, we can run and test them locally. I assume you have already installed the Azure Functions Core Tools as described in the "Prerequisites" section. First, we need to build the app with the following Maven command:

$ mvn clean package

Then, we can take advantage of Quarkus Azure Extension and use the following Maven command to run the app in Azure Functions local environment:

$ mvn quarkus:run

Here's the output after running the command above. As you can see, there is just a single Azure function, QuarkusHttp, although we have two methods annotated with @Funq. Quarkus allows us to invoke multiple Funqy functions using a single wildcarded route: http://localhost:8081/api/{*path}.

[Image: quarkus-azure-function-local]

All the required Azure Function configuration files like host.json, local.settings.json and function.json are autogenerated by Quarkus during the build. You can find them in the target/azure-functions directory.

Here’s the auto-generated function.json with our Azure Function definition:

{
  "scriptFile" : "../account-function-1.0.jar",
  "entryPoint" : "io.quarkus.azure.functions.resteasy.runtime.Function.run",
  "bindings" : [ {
    "type" : "httpTrigger",
    "direction" : "in",
    "name" : "req",
    "route" : "{*path}",
    "methods" : [ "GET", "HEAD", "POST", "PUT", "OPTIONS" ],
    "dataType" : "binary",
    "authLevel" : "ANONYMOUS"
  }, {
    "type" : "http",
    "direction" : "out",
    "name" : "$return"
  } ]
}

Let’s call our local function. In the first step, we will add a new account by calling the addAccount function:

$ curl http://localhost:8081/api/add-account \
    -d "{\"number\":\"124\",\"customerId\":1, \"balance\":1000}" \
    -H "Content-Type: application/json"

Then, we can find the account by its number. For GET requests, the Funqy HTTP binding allows query parameters to be mapped onto function input parameters. The query parameter names are mapped to properties on the bean class.

$ curl http://localhost:8081/api/by-number?number=124
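The mapping described above (query parameter names onto bean properties) can be pictured with a small reflection sketch. The helper below is purely illustrative, assuming public fields like those on the article's Account class; Funqy's real binding is more elaborate.

```java
import java.lang.reflect.Field;
import java.util.Map;

// Illustrative sketch of query-parameter-to-bean binding: each query
// parameter name is matched against a public field of the bean.
public class QueryParamBinder {

    public static class Account {
        public String number;
        public int balance;
    }

    static <T> T bind(Map<String, String> params, T bean) throws Exception {
        for (Map.Entry<String, String> e : params.entrySet()) {
            Field f = bean.getClass().getField(e.getKey());
            if (f.getType() == int.class) {
                f.setInt(bean, Integer.parseInt(e.getValue()));
            } else {
                f.set(bean, e.getValue());
            }
        }
        return bean;
    }

    public static void main(String[] args) throws Exception {
        // ?number=124 becomes a populated Account bean
        Account a = bind(Map.of("number", "124"), new Account());
        System.out.println("bound number=" + a.number);
    }
}
```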

Deploy Quarkus Serverless on Azure Functions

Finally, we can deploy our sample Quarkus serverless app on Azure. As you probably remember, we already have all the required settings in the application.properties file. So now, we just need to run the following Maven command:

$ mvn quarkus:deploy

Here’s the output of the command. As you see, there is still one Azure Function with a wildcard in the path.

Let’s switch to the Azure Portal. Here’s a page with the pminkows-account-function details:

[Image: quarkus-azure-function-portal]

We can call a similar query several times with different input data to test the service:

$ curl https://pminkows-account-function.azurewebsites.net/api/add-account \
    -d "{\"number\":\"127\",\"customerId\":4, \"balance\":1000}" \
    -H "Content-Type: application/json"

Here’s the invocation history visible in the Azure Monitor for our QuarkusHttp function.

Final Thoughts

In this article, I showed you a simplified scenario for running a Quarkus serverless app on Azure Functions. You don't need to know much about Azure Functions to run such a service, since Quarkus handles most of the required work for you.

Serverless on Azure with Spring Cloud Function
https://piotrminkowski.com/2024/01/19/serverless-on-azure-with-spring-cloud-function/
Fri, 19 Jan 2024
This article will teach you how to create and run serverless apps on Azure using the Spring Cloud Function and Spring Cloud Azure projects. We will integrate with the Azure Functions and Azure Event Hubs services.

This is not my first article about Azure and Spring Cloud. As preparation for this exercise, it is worth reading my earlier article to familiarize yourself with some interesting features of Spring Cloud Azure. It describes integration with the Azure Spring Apps, Cosmos DB, and App Configuration services. On the other hand, if you are interested in CI/CD for Spring Boot apps, you can refer to my article about Azure DevOps and Terraform.

Source Code

If you would like to try it yourself, you can always take a look at my source code by cloning my GitHub repository. The Spring Boot apps used in this article are located in the serverless directory. Once you are in that directory, just follow my instructions below.

Architecture

In this exercise, we will prepare two sample Spring Boot apps (aka functions): account-function and customer-function. Then, we will deploy them on the Azure Functions service. Our apps do not communicate with each other directly, but through the Azure Event Hubs service. Event Hubs is a cloud-native data streaming service compatible with the Apache Kafka API. After adding a new customer, the customer-function sends an event to Azure Event Hubs using the Spring Cloud Stream binder. The account-function app receives the event through the Azure Event Hubs trigger. Also, the customer-function is exposed to external clients through the Azure HTTP trigger. Here's the diagram of our architecture:

[Image: azure-serverless-spring-cloud-arch]

Prerequisites

There are some prerequisites before you start the exercise. You need to install JDK 17+ and Maven on your local machine. You also need an Azure account and the az CLI to interact with it. Once you install the az CLI and log in to Azure, you can execute the following command for verification:

$ az account show

If you would like to test Azure Functions locally, you need to install Azure Functions Core Tools. You can find detailed installation instructions in Microsoft Docs here. For macOS, there are three required commands to run:

$ brew tap azure/functions
$ brew install azure-functions-core-tools@4
$ brew link --overwrite azure-functions-core-tools@4

Create Resources on Azure

Before we proceed with the source code, we need to create several required resources on the Azure cloud. In the first step, we will prepare a resource group for all required objects. The name of the group is spring-cloud-serverless. The location depends on your preferences. For me it is eastus.

$ az group create -l eastus -n spring-cloud-serverless

In the next step, we need to create a storage account. The Azure Function service requires it, but we will also use that account during the local development with Azure Functions Core Tools.

$ az storage account create -n pminkowsserverless \
     -g spring-cloud-serverless \
     -l eastus \
     --sku Standard_LRS

In order to run serverless apps on Azure with, for example, Spring Cloud Function, we need to create the Azure Function App instances. Of course, we use the previously created resource group and storage account. The names of my Function App instances are pminkows-account-function and pminkows-customer-function. We can also set a default OS type (Linux), functions version (4), and runtime stack (Java) for each Function App.

$ az functionapp create -n pminkows-customer-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g spring-cloud-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

$ az functionapp create -n pminkows-account-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g spring-cloud-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

Then, we have to create the Azure Event Hubs namespace. The name of my namespace is spring-cloud-serverless. As before, I choose the East US location and the spring-cloud-serverless resource group. We can also set the pricing tier (Standard) and the upper limit of throughput units for when the AutoInflate option is enabled.

$ az eventhubs namespace create -n spring-cloud-serverless \
     -g spring-cloud-serverless \
     --location eastus \
     --sku Standard \
     --maximum-throughput-units 1 \
     --enable-auto-inflate true

Finally, we have to create topics on Event Hubs. Of course, they have to be assigned to the previously created spring-cloud-serverless Event Hubs namespace. The names of our topics are accounts and customers. The number of partitions is irrelevant in this exercise.

$ az eventhubs eventhub create -n accounts \
     -g spring-cloud-serverless \
     --namespace-name spring-cloud-serverless \
     --cleanup-policy Delete \
     --partition-count 3

$ az eventhubs eventhub create -n customers \
     -g spring-cloud-serverless \
     --namespace-name spring-cloud-serverless \
     --cleanup-policy Delete \
     --partition-count 3

Now, let’s switch to the Azure Portal. Find the spring-cloud-serverless resource group. You should have the same list of resources inside this group as shown below. It means that our environment is ready and we can proceed to the source code.

App Dependencies

Firstly, we need to declare the dependencyManagement section inside the Maven pom.xml for three projects used in the app implementation: Spring Boot, Spring Cloud, and Spring Cloud Azure.

<properties>
  <java.version>17</java.version>
  <spring-boot.version>3.1.4</spring-boot.version>
  <spring-cloud-azure.version>5.7.0</spring-cloud-azure.version>
  <spring-cloud.version>2022.0.4</spring-cloud.version>
  <maven.compiler.release>${java.version}</maven.compiler.release>
  <maven.compiler.source>${java.version}</maven.compiler.source>
  <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>${spring-cloud.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.azure.spring</groupId>
      <artifactId>spring-cloud-azure-dependencies</artifactId>
      <version>${spring-cloud-azure.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Here’s the list of required Maven dependencies. We need to include the spring-cloud-function-context library to enable Spring Cloud Functions. In order to integrate with the Azure Functions service, we need to include the spring-cloud-function-adapter-azure extension. Our apps also send messages to the Azure Event Hubs service through the dedicated Spring Cloud Stream binder.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-azure</artifactId>
  </dependency>
  <dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-stream-binder-eventhubs</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-context</artifactId>
  </dependency>
</dependencies>

Create Azure Serverless Apps with Spring Cloud

Expose Azure Function as HTTP endpoint

Let's begin with the customer-function. To simplify the app, we will use an in-memory H2 database for storing data. Each time a new customer is added, it is persisted in the in-memory database using the Spring Data JPA extension. Here's our entity class:

@Entity
public class Customer {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Long id;
   private String name;
   private int age;
   private String status;

   // GETTERS AND SETTERS
}

Here’s the Spring Data repository interface for the Customer entity:

public interface CustomerRepository extends ListCrudRepository<Customer, Long> {
}

Let's proceed to the Spring Cloud Function implementation. The CustomerInternalFunctions bean defines two functions. The addCustomer function (1) persists a new customer in the database and sends an event about it to the Azure Event Hubs topic. It uses the Spring Cloud Stream StreamBridge bean to interact with Event Hubs. It also returns the persisted Customer entity as a response. The second function, changeStatus (2), doesn't return any data; it just reacts to the incoming event and changes the status of the particular customer. That event is sent by the account-function app after generating an account for a newly created customer. The important point here is that these are just Spring Cloud functions. For now, they have nothing to do with Azure Functions.

@Service
public class CustomerInternalFunctions {

   private static final Logger LOG = LoggerFactory
      .getLogger(CustomerInternalFunctions.class);

   private StreamBridge streamBridge;
   private CustomerRepository repository;

   public CustomerInternalFunctions(StreamBridge streamBridge, 
                                    CustomerRepository repository) {
      this.streamBridge = streamBridge;
      this.repository = repository;
   }

   // (1)
   @Bean
   public Function<Customer, Customer> addCustomer() {
      return c -> {
         Customer newCustomer = repository.save(c);
         streamBridge.send("customers-out-0", newCustomer);
         LOG.info("New customer added: {}", c);
         return newCustomer;
      };
   }

   // (2)
   @Bean
   public Consumer<Account> changeStatus() {
      return account -> {
         Customer customer = repository.findById(account.getCustomerId())
                .orElseThrow();
         customer.setStatus(Customer.CUSTOMER_STATUS_ACC_ACTIVE);
         repository.save(customer);
         LOG.info("Customer activated: id={}", customer.getId());
      };
   }

}
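The addCustomer function above follows a save-then-publish shape: persist the entity, emit an event about it, and return the saved result. Stripped of Spring, that shape is just a function, as the sketch below shows. The in-memory lists are hypothetical stand-ins for the repository and the Event Hubs binding.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// The save-then-publish shape of addCustomer(), with in-memory
// stand-ins for the repository (DB) and the Event Hubs topic (TOPIC).
public class SaveThenPublishSketch {

    record Customer(Long id, String name) {}

    static final List<Customer> DB = new ArrayList<>();
    static final List<Customer> TOPIC = new ArrayList<>();

    static Function<Customer, Customer> addCustomer() {
        return c -> {
            // repository.save(c): assign an id and store the entity
            Customer saved = new Customer((long) (DB.size() + 1), c.name());
            DB.add(saved);
            // streamBridge.send("customers-out-0", saved)
            TOPIC.add(saved);
            return saved;
        };
    }

    public static void main(String[] args) {
        Customer saved = addCustomer().apply(new Customer(null, "alice"));
        System.out.println("saved id=" + saved.id()
                + ", published events=" + TOPIC.size());
    }
}
```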

We also need to provide several configuration properties in the Spring Boot application.properties file. We should set the Azure Event Hubs connection URL and the name of the target topic. Since our app uses Spring Cloud Stream only for sending events we should also turn off the autodiscovery of functional beans as messaging bindings.

spring.cloud.azure.eventhubs.connection-string = ${EVENT_HUBS_CONNECTION_STRING}
spring.cloud.stream.bindings.customers-out-0.destination = customers
spring.cloud.stream.function.autodetect = false

In order to expose the functions on Azure, we will take advantage of the Spring Cloud Function Azure adapter. It adds several Azure libraries to our Maven dependencies. It can invoke Spring Cloud functions directly or through the lookup approach with the FunctionCatalog bean (1). The handler method has to be annotated with @FunctionName, which defines the name of the function in Azure (2). In order to expose that function over HTTP, we need to define the Azure @HttpTrigger (3). The trigger exposes the function as a POST endpoint and doesn't require any authorization. Our method receives the request through the HttpRequestMessage object. Then, it invokes the Spring Cloud function addCustomer by name using the FunctionCatalog bean (4).

// (1)
@Autowired
private FunctionCatalog functionCatalog;

@FunctionName("add-customer") // (2)
public Customer addCustomerFunc(
        // (3)
        @HttpTrigger(name = "req",
                     methods = { HttpMethod.POST },
                     authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<Customer>> request,
        ExecutionContext context) {
   Customer c = request.getBody().orElseThrow();
   context.getLogger().info("Request: " + c);
   // (4)
   Function<Customer, Customer> function = functionCatalog
      .lookup("addCustomer");
   return function.apply(c);
}
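The lookup approach with FunctionCatalog is essentially a typed registry keyed by function (bean) name. A minimal, hypothetical sketch of that idea, not Spring's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual stand-in for the FunctionCatalog lookup idea: Azure
// handler methods resolve Spring Cloud functions by bean name.
public class CatalogSketch {

    private final Map<String, Function<?, ?>> catalog = new HashMap<>();

    public <I, O> void register(String name, Function<I, O> fn) {
        catalog.put(name, fn);
    }

    // Unchecked cast: the caller states the expected in/out types,
    // just as with FunctionCatalog.lookup(...)
    @SuppressWarnings("unchecked")
    public <I, O> Function<I, O> lookup(String name) {
        return (Function<I, O>) catalog.get(name);
    }

    public static void main(String[] args) {
        CatalogSketch catalog = new CatalogSketch();
        catalog.register("addCustomer", (String name) -> "customer:" + name);
        Function<String, String> addCustomer = catalog.lookup("addCustomer");
        System.out.println(addCustomer.apply("alice"));
    }
}
```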

Integrate a Function with the Azure Event Hubs Trigger

Let's switch to the account-function app directory inside our Git repository. Like customer-function, it uses an in-memory H2 database, in this case for storing customer accounts. Here's our Account entity class:

@Entity
public class Account {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String number;
    private Long customerId;
    private int balance;

   // GETTERS AND SETTERS ...
}

Inside the account-function app, there is a single Spring Cloud Function addAccount. It generates a new 16-digit account number for each new customer. Then, it saves the account in the database and sends the event to the Azure Event Hubs with Spring Cloud Stream StreamBridge bean.

@Service
public class AccountInternalFunctions {

   private static final Logger LOG = LoggerFactory
      .getLogger(AccountInternalFunctions.class);

   private StreamBridge streamBridge;
   private AccountRepository repository;

   public AccountInternalFunctions(StreamBridge streamBridge, 
                                   AccountRepository repository) {
      this.streamBridge = streamBridge;
      this.repository = repository;
   }

   @Bean
   public Function<Customer, Account> addAccount() {
      return customer -> {
         String n = RandomStringUtils.random(16, false, true);
         Account a = new Account(n, customer.getId(), 0);
         a = repository.save(a);
         boolean b = streamBridge.send("accounts-out-0", a);
         LOG.info("New account added: {}", a);
         return a;
      };
   }

}
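RandomStringUtils.random(16, false, true) comes from Apache Commons Lang 3 and produces a 16-character string of random digits. If you would rather avoid the extra dependency, a plain-JDK equivalent could look like the sketch below (this is not the code used in the article):

```java
import java.util.concurrent.ThreadLocalRandom;

// A plain-JDK stand-in for RandomStringUtils.random(16, false, true):
// a string of `length` random decimal digits.
public class AccountNumberSketch {

    static String randomDigits(int length) {
        StringBuilder sb = new StringBuilder(length);
        ThreadLocalRandom random = ThreadLocalRandom.current();
        for (int i = 0; i < length; i++) {
            sb.append(random.nextInt(10)); // one digit in [0, 9]
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("account number: " + randomDigits(16));
    }
}
```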

As with customer-function, we need to provide some configuration settings for Spring Cloud Stream. However, this time we are sending events to the accounts topic instead of the customers topic.

spring.cloud.azure.eventhubs.connection-string = ${EVENT_HUBS_CONNECTION_STRING}
spring.cloud.stream.bindings.accounts-out-0.destination = accounts
spring.cloud.stream.function.autodetect = false

The account-function app is not exposed as an HTTP endpoint. We want to trigger the function in reaction to an event delivered to the Azure Event Hubs topic. The name of our Azure function is new-customer (1). It receives the Customer event from the customers topic thanks to the @EventHubTrigger annotation (2). This annotation also defines the name of the property that contains the connection string to the Azure Event Hubs namespace (EVENT_HUBS_CONNECTION_STRING). I'll show you later how to set such a property for our function in Azure. Once the new-customer function is triggered, it invokes the Spring Cloud function addAccount (3).

@Autowired
private FunctionCatalog functionCatalog;

// (1)
@FunctionName("new-customer")
public void newAccountEventFunc(
        // (2)
        @EventHubTrigger(eventHubName = "customers",
                         name = "newAccountTrigger",
                         connection = "EVENT_HUBS_CONNECTION_STRING",
                         cardinality = Cardinality.ONE)
        Customer event,
        ExecutionContext context) {
   context.getLogger().info("Event: " + event);
   // (3)
   Function<Customer, Account> function = functionCatalog
      .lookup("addAccount");
   function.apply(event);
}

At the end of this section, let's switch back once again to the customer-function app. It reacts to the event sent by the new-customer function to the accounts topic. Therefore, we use the @EventHubTrigger annotation once again.

@FunctionName("activate-customer")
public void activateCustomerEventFunc(
          @EventHubTrigger(eventHubName = "accounts",
                 name = "changeStatusTrigger",
                 connection = "EVENT_HUBS_CONNECTION_STRING",
                 cardinality = Cardinality.ONE)
          Account event,
          ExecutionContext context) {
   context.getLogger().info("Event: " + event);
   Consumer<Account> consumer = functionCatalog.lookup("changeStatus");
   consumer.accept(event);
}

All our functions are ready. Now, we can proceed to the deployment phase.

Running Azure Functions Locally with Maven

Before we deploy our functions on Azure, we can run and test them locally. I assume you have already installed the Azure Functions Core Tools as described in the "Prerequisites" section. Our functions still need to connect to some cloud services, such as Azure Event Hubs and the storage account. The Azure Event Hubs connection string should be set in the EVENT_HUBS_CONNECTION_STRING app property or environment variable. To find the Event Hubs connection string, switch to the Azure Portal and find the spring-cloud-serverless namespace. Then, go to the "Shared access policies" menu item and click the "RootManageSharedAccessKey" policy. The connection string value is available in the "Connection string-primary key" field.

We also need to obtain the connection credentials to the Azure storage account. In your storage account, you should find the “Access keys” section and copy the value from the “Connection string” field.

After that, we can create the local.settings.json file in each app's root directory. Alternatively, we can just set the AzureWebJobsStorage and EVENT_HUBS_CONNECTION_STRING environment variables.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": <YOUR_ACCOUNT_STORAGE_CONNECTION_STRING>,
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "EVENT_HUBS_CONNECTION_STRING": <YOUR_EVENT_HUBS_CONNECTION_STRING>
  }
}

Once you place your credentials in the local.settings.json file, you can build the app with Maven.

$ mvn clean package

After that, you can use the Maven plugin included in the spring-cloud-function-adapter-azure module. In order to run the function, you need to execute the following command:

$ mvn azure-functions:run 

Here’s the command output for the customer-function app. As you see, it contains two Azure functions: add-customer (HTTP) and activate-customer (Event Hub Trigger). You can test the function by invoking the http://localhost:7071/api/add-customer URL.

Here’s the command output for the account-function app. As you see, it contains a single function new-customer activated through the Event Hub trigger.

Deploy Spring Cloud Serverless on Azure Functions

Let’s take a look at the pminkows-customer-function Azure Function App before we deploy our first app there. The http://pminkows-customer-function.azurewebsites.net base URL will precede all the HTTP endpoint URLs for our functions. We should remember the name of the automatically generated service plan (EastUSLinuxDynamicPlan).

Before deploying our Spring Boot app we should create the host.json file with the following content:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}

Then, we should add the azure-functions-maven-plugin Maven plugin to our pom.xml. In the configuration section, we have to set the name of the Azure Function App instance (pminkows-customer-function), the target resource group (spring-cloud-serverless), the region (eastus), the service plan (EastUSLinuxDynamicPlan), and the location of the host.json file. We also need to set the connection string to the Azure Event Hubs inside the EVENT_HUBS_CONNECTION_STRING app property. So before running the build, you should export the value of that environment variable.

<plugin>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-functions-maven-plugin</artifactId>
  <version>1.30.0</version>
  <configuration>
    <appName>pminkows-customer-function</appName>
    <resourceGroup>spring-cloud-serverless</resourceGroup>
    <region>eastus</region>
    <appServicePlanName>EastUSLinuxDynamicPlan</appServicePlanName>
    <hostJson>${project.basedir}/src/main/resources/host.json</hostJson>
    <runtime>
      <os>linux</os>
      <javaVersion>17</javaVersion>
    </runtime>
    <appSettings>
      <property>
        <name>FUNCTIONS_EXTENSION_VERSION</name>
        <value>~4</value>
      </property>
      <property>
        <name>EVENT_HUBS_CONNECTION_STRING</name>
        <value>${EVENT_HUBS_CONNECTION_STRING}</value>
      </property>
    </appSettings>
  </configuration>
  <executions>
    <execution>
      <id>package-functions</id>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Let’s begin with the customer-function app. Before deploying the app we need to build it first with the mvn clean package command. Once you do that, you can deploy your first function to Azure with the following command:

$ mvn azure-functions:deploy

Here’s my output after running that command. As you see, the function add-customer is exposed under the http://pminkows-customer-function.azurewebsites.net/api/add-customer URL.

azure-serverless-spring-cloud-deploy-mvn

We can come back to the pminkows-customer-function Azure Function App in the portal. As expected, two functions are running there. The activate-customer function is triggered by Azure Event Hubs.

azure-serverless-spring-cloud-functions

Let’s switch to the account-function directory. We will deploy it to the pminkows-account-function Function App. Once again, we need to build the app with the mvn clean package command, and then deploy it using the mvn azure-functions:deploy command. Here’s the output. There are no HTTP triggers defined, just a single function triggered by Azure Event Hubs.

Here are the details of the pminkows-account-function Function App in the Azure Portal.

Invoke Azure Functions

Finally, we can test our functions by calling the following endpoint with e.g. curl. Let’s repeat a similar command several times with different data:

$ curl https://pminkows-customer-function.azurewebsites.net/api/add-customer \
    -d "{\"name\":\"Test\",\"age\":33}" \
    -H "Content-Type: application/json"

After that, we should switch to the Azure Portal. Go to the pminkows-customer-function details and click the link “Invocation and more” on the add-customer function.

You will be redirected to the Azure Monitor statistics for that function. Azure Monitor displays a list of invocations with statuses.

azure-serverless-spring-cloud-functions-invocations

We can click one of the records from the invocation history to see the details.

As you probably remember, the add-customer function sends messages to Azure Event Hubs. On the other hand, we can also verify how the new-customer function in the pminkows-account-function consumes and handles those events.

azure-serverless-spring-cloud-functions-logs

Final Thoughts

This article gives you a comprehensive guide on how to build and run Spring Cloud serverless apps on Azure Functions. It explains the concept of the triggers in Azure Functions and shows the integration with Azure Event Hubs. Finally, it shows how to run such functions locally and then monitor them on Azure after deployment.

The post Serverless on Azure with Spring Cloud Function appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/01/19/serverless-on-azure-with-spring-cloud-function/feed/ 0 14829
Serverless on OpenShift with Knative, Quarkus and Kafka https://piotrminkowski.com/2023/04/18/serverless-on-openshift-with-knative-quarkus-and-kafka/ https://piotrminkowski.com/2023/04/18/serverless-on-openshift-with-knative-quarkus-and-kafka/#comments Tue, 18 Apr 2023 09:02:49 +0000 https://piotrminkowski.com/?p=14100 In this article, you will learn how to build and run Quarkus serverless apps on OpenShift and integrate them through Knative Eventing. We will use Kafka to exchange messages between the apps. However, Knative supports various event sources. Kafka is just one of the available options. You can check out a full list of supported […]

The post Serverless on OpenShift with Knative, Quarkus and Kafka appeared first on Piotr's TechBlog.

]]>

In this article, you will learn how to build and run Quarkus serverless apps on OpenShift and integrate them through Knative Eventing. We will use Kafka to exchange messages between the apps. However, Knative supports various event sources. Kafka is just one of the available options. You can check out a full list of supported solutions in the Knative Eventing docs.

I have already published several articles about Knative on my blog. If you want a brief start, read my article about Knative basics and Spring Boot. There is also an article similar to the current one, more focused on Kubernetes. Today, we will focus more on the OpenShift support for serverless features. Also, Knative is changing dynamically, so there are some significant differences in comparison to the version described in my previous articles.

Although I’m running my example apps on OpenShift, I’ll give you a recipe for how to do the same thing on vanilla Kubernetes. In order to run them on Kubernetes, you need to activate the kubernetes Maven profile instead of openshift during the build.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

How it works

During the exercise, we will deploy three Quarkus apps on OpenShift: order-service, stock-service and payment-service. All these apps expose a single HTTP POST endpoint for incoming events. They also use the Quarkus REST client for sending events to Kafka through Knative Eventing. The order-service app sends a single event that should be received by both stock-service and payment-service. They process the event and send a response back to the order-service. All of this happens asynchronously by leveraging Kafka and the Knative Broker. However, that process is completely transparent to the apps, which just expose an HTTP endpoint and use an HTTP client to call the endpoint exposed by the KafkaSink object.

The diagram visible below illustrates the architecture of our solution. There are several Knative objects involved: KafkaSink, KafkaSource, Trigger, and Broker. The KafkaSink object eliminates the need to use a Kafka client on the app side. It receives HTTP requests in the CloudEvent format and converts them into Kafka messages sent to a particular topic. The KafkaSource object receives messages from Kafka and sends them to the Knative Broker. Finally, we need to define a Trigger. The Trigger object filters the events inside the Broker and sends them to the target app by calling its HTTP endpoint.

openshift-serverless-arch
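To make the CloudEvent contract concrete, here’s a sketch of the request our apps effectively send to the KafkaSink ingress. It only builds the request with java.net.http (without sending it); the URL, header values, and JSON body are illustrative and mirror the configuration discussed later in this article.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CloudEventRequest {

    // Builds the kind of HTTP request the KafkaSink ingress accepts.
    // The namespace (demo-eventing), sink name (payment-sink) and header
    // values are assumptions based on the setup described in this article.
    static HttpRequest buildRequest() {
        return HttpRequest.newBuilder()
            .uri(URI.create("http://kafka-sink-ingress.knative-eventing.svc.cluster.local/demo-eventing/payment-sink"))
            .header("Content-Type", "application/json")
            // the Ce-* headers carry the CloudEvent attributes used for trigger filtering
            .header("Ce-Id", "1")
            .header("Ce-Specversion", "1.0")
            .header("Ce-Type", "reserve-event")
            .header("Ce-Source", "payment")
            .POST(HttpRequest.BodyPublishers.ofString("{\"customerId\":1,\"amount\":100,\"status\":\"NEW\"}"))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest();
        System.out.println(request.method() + " " + request.uri());
        request.headers().map().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

The Ce-Type and Ce-Source headers are exactly what the Trigger objects filter on, as we will see in the configuration section.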

Prerequisites

Before we proceed to the Quarkus apps, we need to install and configure two operators on OpenShift: AMQ Streams (Kafka Strimzi) and OpenShift Serverless (Knative).

openshift-serverless-operators

In the configuration phase, we create four components: Kafka, KnativeServing, KnativeEventing and KnativeKafka. We can easily create them using the OpenShift Console. In all cases, we can leave the default settings. I create the Kafka instance in the kafka namespace. Just in case, here’s the YAML manifest for creating a 3-node Kafka cluster:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.3'
    storage:
      type: ephemeral
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.3.1
    replicas: 3
  zookeeper:
    storage:
      type: ephemeral
    replicas: 3
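The topics used later in this article (order-events and reserve-events) will be created by the KafkaSink objects themselves. However, if you prefer to manage topics declaratively with the Strimzi operator, a KafkaTopic manifest is a possible alternative (the settings below are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: order-events
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
```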

The KnativeServing should be created in the knative-serving namespace, while KnativeEventing in the knative-eventing namespace.

kind: KnativeServing
apiVersion: operator.knative.dev/v1beta1
metadata:
  name: knative-serving
  namespace: knative-serving
spec: {}
---
kind: KnativeEventing
apiVersion: operator.knative.dev/v1beta1
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec: {}

Or just “click” the create button in OpenShift Console.

Finally, there is the last required component – KnativeKafka. We should at least enable the sink and source components to install the KafkaSink and KafkaSource CRDs and controllers.

apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  logging:
    level: INFO
  sink:
    enabled: true
  source:
    enabled: true

Functions Support in Quarkus

Although we will implement an event-driven architecture today, our Quarkus apps just expose and call HTTP endpoints. In order to expose a method as an HTTP endpoint we need to include the Quarkus Funqy HTTP module. On the other hand, to call the HTTP endpoint exposed by another component we can leverage the Quarkus declarative REST client. Our apps store data in an in-memory H2 database and use Panache ORM.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

In order to expose a method as an HTTP endpoint we just need to annotate it with @Funq. Here’s the function implementation from the payment-service. It receives two types of orders: reservation and confirmation. For the reservation type (status=NEW) it reserves funds in the customer account. For the confirmation type, it accepts or rolls back the transaction depending on the order’s status. By default, a method annotated with @Funq is exposed under the same path as its name – in our case, the address of the endpoint is POST /reserve.

public class OrderReserveFunction {

   private static final String SOURCE = "payment";
    
   Logger log;
   OrderReserveService orderReserveService;
   OrderConfirmService orderConfirmService;

   public OrderReserveFunction(Logger log,
                               OrderReserveService orderReserveService,
                               OrderConfirmService orderConfirmService) {
      this.log = log;
      this.orderReserveService = orderReserveService;
      this.orderConfirmService = orderConfirmService;
   }

   @Funq
   public Customer reserve(Order order) {
      log.infof("Received order: %s", order);
      if (order.getStatus() == OrderStatus.NEW) {
         return orderReserveService.doReserve(order);
      } else {
         return orderConfirmService.doConfirm(order);
      }
   }

}

Let’s take a look at the payment reservation implementation. We assume multiple incoming requests may access the same customer entity concurrently, so we need to lock the entity during the transaction. Once the reservation is performed, we need to send a response back to the order-service. We leverage the Quarkus REST client for that.

@ApplicationScoped
public class OrderReserveService {

    private static final String SOURCE = "payment";

    Logger log;
    CustomerRepository repository;
    OrderSender sender;

    public OrderReserveService(Logger log,
                               CustomerRepository repository,
                               @RestClient OrderSender sender) {
        this.log = log;
        this.repository = repository;
        this.sender = sender;
    }

    @Transactional
    public Customer doReserve(Order order) {
        Customer customer = repository.findById(order.getCustomerId(), LockModeType.PESSIMISTIC_WRITE);
        if (customer == null)
            throw new NotFoundException();
        log.infof("Customer: %s", customer);
        if (order.getAmount() < customer.getAmountAvailable()) {
            order.setStatus(OrderStatus.IN_PROGRESS);
            customer.setAmountReserved(customer.getAmountReserved() + order.getAmount());
            customer.setAmountAvailable(customer.getAmountAvailable() - order.getAmount());
        } else {
            order.setStatus(OrderStatus.REJECTED);
        }
        order.setSource(SOURCE);
        repository.persist(customer);
        log.infof("Order reserved: %s", order);
        sender.send(order);
        return customer;
    }
}

Here’s the implementation of our REST client. It sends a message to the endpoint exposed by the KafkaSink object. The path of the endpoint corresponds to the name of the KafkaSink object. We also need to set HTTP headers to meet the CloudEvent format. Therefore, we register a custom ClientHeadersFactory implementation.

@ApplicationScoped
@RegisterRestClient
@RegisterClientHeaders(CloudEventHeadersFactory.class)
public interface OrderSender {

    @POST
    @Path("/payment-sink")
    void send(Order order);

}

Our custom ClientHeadersFactory implementation sets several Ce-* (CloudEvent) headers. The most important headers are Ce-Type and Ce-Source, since we will filter events based on those values later.

@ApplicationScoped
public class CloudEventHeadersFactory implements ClientHeadersFactory {

    AtomicLong id = new AtomicLong();

    @Override
    public MultivaluedMap<String, String> update(MultivaluedMap<String, String> incoming,
                                                 MultivaluedMap<String, String> outgoing) {
        MultivaluedMap<String, String> result = new MultivaluedHashMap<>();
        result.add("Ce-Id", String.valueOf(id.incrementAndGet()));
        result.add("Ce-Specversion", "1.0");
        result.add("Ce-Type", "reserve-event");
        result.add("Ce-Source", "stock");
        return result;
    }

}

Finally, let’s take a look at the payment confirmation service:

@ApplicationScoped
public class OrderConfirmService {

    private static final String SOURCE = "payment";

    Logger log;
    CustomerRepository repository;

    public OrderConfirmService(Logger log, 
                               CustomerRepository repository) {
        this.log = log;
        this.repository = repository;
    }

    @Transactional
    public Customer doConfirm(Order order) {
        Customer customer = repository.findById(order.getCustomerId());
        if (customer == null)
            throw new NotFoundException();
        log.infof("Customer: %s", customer);
        if (order.getStatus() == OrderStatus.CONFIRMED) {
            customer.setAmountReserved(customer.getAmountReserved() - order.getAmount());
            repository.persist(customer);
        } else if (order.getStatus() == OrderStatus.ROLLBACK && !order.getRejectedService().equals(SOURCE)) {
            customer.setAmountReserved(customer.getAmountReserved() - order.getAmount());
            customer.setAmountAvailable(customer.getAmountAvailable() + order.getAmount());
            repository.persist(customer);
        }
        return customer;
    }
    
}

Also, let’s take a look at the implementation of the function in the order-service.

public class OrderConfirmFunction {

    private final Logger log;
    private final OrderService orderService;

    public OrderConfirmFunction(Logger log, OrderService orderService) {
        this.log = log;
        this.orderService = orderService;
    }

    @Funq
    public void confirm(Order order) {
        log.infof("Accepted order: %s", order);
        orderService.doConfirm(order);
    }

}

Here’s the function implementation for the stock-service:

public class OrderReserveFunction {

    private static final String SOURCE = "stock";

    private final OrderReserveService orderReserveService;
    private final OrderConfirmService orderConfirmService;
    private final Logger log;

    public OrderReserveFunction(OrderReserveService orderReserveService,
                                OrderConfirmService orderConfirmService,
                                Logger log) {
        this.orderReserveService = orderReserveService;
        this.orderConfirmService = orderConfirmService;
        this.log = log;
    }

    @Funq
    public void reserve(Order order) {
        log.infof("Received order: %s", order);
        if (order.getStatus() == OrderStatus.NEW) {
            orderReserveService.doReserve(order);
        } else {
            orderConfirmService.doConfirm(order);
        }
    }

}

Configure Knative Eventing

After finishing the implementation of the app logic, we can proceed to the configuration of the OpenShift Serverless and Knative components. If you are using my GitHub repository you don’t have to apply any YAML manifests manually. All the required configuration is applied during the Maven build. It is possible thanks to the Quarkus Kubernetes extension and its support for Knative. We just need to place all the required YAML manifests inside the src/main/kubernetes/knative.yml file and the magic happens by itself.

However, in order to understand what happens, let’s discuss the Knative configuration step by step. In the first step, we need to create the KafkaSink objects. A KafkaSink exposes an HTTP endpoint and takes a CloudEvent on input. Then it sends that event to a particular topic in the Kafka cluster. Here’s the KafkaSink definition for the payment-service:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: payment-sink
  namespace: demo-eventing
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topic: reserve-events
  numPartitions: 1

Both payment-service and stock-service send messages to the same reserve-events topic. Therefore, we could also create a single KafkaSink for those two services (I created two sinks, each dedicated to a single app). On the other hand, the order-service app sends messages to the order-events topic, so we have to create a separate KafkaSink:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: order-sink
  namespace: demo-eventing
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topic: order-events
  numPartitions: 1

After that, let’s print the list of sinks in our cluster:

Now, this URL should be set in the application properties for the REST client configuration. Here’s a fragment of the Quarkus application.properties:

%openshift.quarkus.rest-client."pl.piomin.samples.quarkus.serverless.order.client.OrderSender".url = http://kafka-sink-ingress.knative-eventing.svc.cluster.local/demo-eventing
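Each app needs an analogous entry pointing at the same KafkaSink ingress. For example, the payment-service configuration might look like this (the package name below is assumed by analogy with the order-service and may differ in the actual repository):

```properties
%openshift.quarkus.rest-client."pl.piomin.samples.quarkus.serverless.payment.client.OrderSender".url = http://kafka-sink-ingress.knative-eventing.svc.cluster.local/demo-eventing
```

Only the base URL goes into the property – the sink name (e.g. /payment-sink) comes from the @Path annotation on the client interface.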

With KafkaSink we are able to send messages to the Kafka cluster. In order to receive them on the target app’s side, we need to create some other objects. In the first step, we will create the Knative Broker and a KafkaSource object. The broker may be easily created using the kn CLI:

$ kn broker create default
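Under the hood, the kn command above creates a Broker object in the current namespace. If you prefer declarative manifests, an equivalent definition (assuming the demo-eventing namespace used elsewhere in this article) looks like this:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: demo-eventing
```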

The KafkaSource object connects to the Kafka cluster and receives messages from the defined list of topics. In our case, these are order-events and reserve-events. The output of the KafkaSource object is the already-created default broker. It means that all the messages exchanged between our three apps are delivered to the Knative Broker.

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-to-broker
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - order-events
    - reserve-events
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default

In the final step, we need to configure the mechanism responsible for getting messages from the Knative Broker and sending them to the target services. In order to do that, we have to create Trigger objects. A trigger can filter messages by CloudEvent attributes, which correspond to the Ce-* HTTP headers from the request. For example, the payment-service app receives only messages sent by the order-service and containing the order-event type.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-trigger
spec:
  broker: default
  filter:
    attributes:
      source: order
      type: order-event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: payment-service
    uri: /reserve

The stock-trigger object is very similar. It connects to the default Broker and gets only messages with the source=order and type=order-event. Finally, it calls the POST /reserve endpoint exposed by the stock-service.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: stock-trigger
spec:
  broker: default
  filter:
    attributes:
      source: order
      type: order-event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: stock-service
    uri: /reserve

On the other hand, the order-service app should receive events from both stock-service and payment-service. Therefore, we are filtering messages just by the type attribute. The target endpoint of the order-service is POST /confirm.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-trigger
spec:
  broker: default
  filter:
    attributes:
      type: reserve-event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-service
    uri: /confirm

Run Quarkus Apps on Knative

We can leverage the Quarkus Kubernetes/OpenShift extension to run the app as a Knative service. In order to do that, we need to include the quarkus-openshift dependency in the Maven pom.xml. We want to use that module during the build only if we need to deploy the app on the cluster. Therefore, we create a custom Maven profile openshift. Besides including the quarkus-openshift dependency, it also enables deployment by setting the quarkus.kubernetes.deploy property to true and activates the custom Quarkus profile openshift.

<profiles>
  <profile>
    <id>openshift</id>
    <activation>
      <property>
        <name>openshift</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-openshift</artifactId>
      </dependency>
    </dependencies>
    <properties>
      <quarkus.kubernetes.deploy>true</quarkus.kubernetes.deploy>
      <quarkus.profile>openshift</quarkus.profile>
    </properties>
  </profile>
</profiles>

Once we include the quarkus-openshift module, we may use Quarkus configuration properties to customize the deployment process. Firstly, we need to set the quarkus.kubernetes.deployment-target property to knative. Thanks to that, Quarkus will automatically generate the YAML manifest with a Knative Service instead of a standard Kubernetes Deployment. We can also override the default autoscaling settings with the quarkus.knative.revision-auto-scaling.* properties. The whole build process runs on the cluster with S2I (Source-to-Image), so we can use the internal OpenShift registry (the quarkus.container-image.registry property). Here’s a fragment of the application.properties file for the order-service.

quarkus.kubernetes-client.trust-certs = true
quarkus.kubernetes.deployment-target = knative
quarkus.knative.env.vars.tick-timeout = 10000
quarkus.knative.revision-auto-scaling.metric = rps
quarkus.knative.revision-auto-scaling.target = 50

%openshift.quarkus.container-image.group = demo-eventing
%openshift.quarkus.container-image.registry = image-registry.openshift-image-registry.svc:5000
%openshift.app.orders.timeout = ${TICK_TIMEOUT}

Finally, we just need to activate the openshift profile during the build and all the apps will be deployed to the target OpenShift cluster. You can deploy a single app or all the apps by running the following command in the repository root directory:

$ mvn clean package -Popenshift

Once we deploy our apps, we can display a list of Knative services.

openshift-serverless-knative-svc

We can also verify if all the triggers have been configured properly.
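If you prefer the CLI to the OpenShift Console, the same information should be available from kn (assuming the apps run in the demo-eventing namespace):

```
$ kn service list -n demo-eventing
$ kn trigger list -n demo-eventing
```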

Also, let’s take a look at the “Topology” view on OpenShift which illustrates our serverless architecture.

openshift-serverless-topology

Testing Services

It is also worth creating some automated tests to verify the basic functionality before deployment. Since we have simple HTTP apps and an in-memory H2 database we can create standard tests. The only thing we need to do is to mock the HTTP client.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>rest-assured</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5-mockito</artifactId>
  <scope>test</scope>
</dependency>

Here’s the JUnit test for the payment-service. We are verifying both the reservation and confirmation processes for the same order.

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class OrderReserveFunctionTests {

    private static int amount;
    private CustomerRepository repository;
    @InjectMock
    @RestClient
    OrderSender sender;

    public OrderReserveFunctionTests(CustomerRepository repository) {
        this.repository = repository;
    }

    @Test
    @org.junit.jupiter.api.Order(1)
    void reserve() {
        given().contentType("application/json").body(createTestOrder(OrderStatus.NEW)).post("/reserve")
                .then()
                .statusCode(204);

        Customer c = repository.findById(1L);
        amount = c.getAmountAvailable();
        assertEquals(100, c.getAmountReserved());
    }

    @Test
    @org.junit.jupiter.api.Order(2)
    void confirm() {
        given().contentType("application/json").body(createTestOrder(OrderStatus.CONFIRMED)).post("/reserve")
                .then()
                .statusCode(204);

        Customer c = repository.findById(1L);
        assertEquals(0, c.getAmountReserved());
        assertEquals(amount, c.getAmountAvailable());
    }

    private Order createTestOrder(OrderStatus status) {
        Order o = new Order();
        o.setId(1L);
        o.setSource("test");
        o.setStatus(status);
        o.setAmount(100);
        o.setCustomerId(1L);
        return o;
    }
}

Also, let’s take a look at the logs of apps running on OpenShift. As you see, the order-service receives events from both stock-service and payment-service. After that, it confirms the order and sends a confirmation message to both services.

Here are the logs from the payment-service. As you see, it receives the CloudEvent generated by the order-service (the Ce-Source header equals order).

Final Thoughts

With OpenShift Serverless and Knative Eventing, you can easily build an event-driven architecture from simple HTTP-based apps. It is completely transparent to the app which medium is used to store events and how they are routed. The only thing it needs to do is prepare a request according to the CloudEvent specification. OpenShift Serverless brings several features that simplify development. We can also leverage the Quarkus Kubernetes extension to easily build and deploy our apps on OpenShift as Knative services.

The post Serverless on OpenShift with Knative, Quarkus and Kafka appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/04/18/serverless-on-openshift-with-knative-quarkus-and-kafka/feed/ 2 14100
Native Java with GraalVM and Virtual Threads on Kubernetes https://piotrminkowski.com/2023/01/04/native-java-with-graalvm-and-virtual-threads-on-kubernetes/ https://piotrminkowski.com/2023/01/04/native-java-with-graalvm-and-virtual-threads-on-kubernetes/#comments Wed, 04 Jan 2023 12:23:21 +0000 https://piotrminkowski.com/?p=13847 In this article, you will learn how to use virtual threads, build a native image with GraalVM and run such the Java app on Kubernetes. Currently, the native compilation (GraalVM) and virtual threads (Project Loom) are probably the hottest topics in the Java world. They improve the general performance of your app including memory usage […]

The post Native Java with GraalVM and Virtual Threads on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use virtual threads, build a native image with GraalVM, and run such a Java app on Kubernetes. Currently, native compilation (GraalVM) and virtual threads (Project Loom) are probably the hottest topics in the Java world. They improve the general performance of your app, including memory usage and startup time. Since startup time and memory usage have always been a problem for Java, expectations for native images and virtual threads are really big.

Of course, we usually consider such performance issues within the context of microservices or serverless apps. They should not consume many OS resources and should be easily auto-scalable. We can easily control resource usage on Kubernetes. If you are interested in Java virtual threads you can read my previous article about using them to create an HTTP server available here. For more details about Knative as serverless on Kubernetes, you can refer to the following article.

Introduction

Let’s start with the plan for our exercise today. In the first step, we will create a simple Java web app that uses virtual threads for processing incoming HTTP requests. Before we run the sample app we will install Knative on Kubernetes to quickly test autoscaling based on HTTP traffic. We will also install Prometheus on Kubernetes. This monitoring stack allows us to compare the performance of the app without/with GraalVM and virtual threads on Kubernetes. Then, we can proceed with the deployment. In order to easily build and run our native app on Kubernetes we will use Cloud Native Buildpacks. Finally, we will perform some load tests and compare metrics.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, you should follow my instructions.

Create Java App with Virtual Threads

In the first step, we will create a simple Java app that acts as an HTTP server and handles incoming requests. In order to do that, we can use the HttpServer object from the core Java API. Once we create the server, we can override the default thread executor with the setExecutor method. In the end, we will try to compare the app using standard threads with the same app using virtual threads. Therefore, we allow overriding the type of executor using an environment variable named THREAD_TYPE. If you want to enable virtual threads, you need to set its value to virtual. Here’s the main method of our app.

public class MainApp {

   public static void main(String[] args) throws IOException {
      HttpServer httpServer = HttpServer
         .create(new InetSocketAddress(8080), 0);

      httpServer.createContext("/example", 
         new SimpleCPUConsumeHandler());

      // "virtual".equals(...) avoids an NPE when THREAD_TYPE is not set
      if ("virtual".equals(System.getenv("THREAD_TYPE"))) {
         httpServer.setExecutor(
            Executors.newVirtualThreadPerTaskExecutor());
      } else {
         httpServer.setExecutor(Executors.newFixedThreadPool(200));
      }
      httpServer.start();
   }

}

In order to process incoming requests, the HTTP server uses a handler that implements the HttpHandler interface. In our case, the handler is implemented inside the SimpleCPUConsumeHandler class as shown below. It consumes a lot of CPU, since it creates an instance of BigInteger using the constructor that searches for a probable prime of the given bit length under the hood. That search also takes some time, so it simulates processing time in the same step. As a response, we just return the next number in the sequence with the Hello_ prefix.

public class SimpleCPUConsumeHandler implements HttpHandler {

   Logger LOG = Logger.getLogger("handler");
   AtomicLong i = new AtomicLong();
   final Integer cpus = Runtime.getRuntime().availableProcessors();

   @Override
   public void handle(HttpExchange exchange) throws IOException {
      new BigInteger(1000, 3, new Random()); // probable-prime search: deliberately CPU-intensive
      String response = "Hello_" + i.incrementAndGet();
      LOG.log(Level.INFO, "(CPU->{0}) {1}", 
         new Object[] {cpus, response});
      exchange.sendResponseHeaders(200, response.length());
      OutputStream os = exchange.getResponseBody();
      os.write(response.getBytes());
      os.close();
   }
}
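To see the executor difference in isolation, here is a minimal, hedged sketch (not part of the repository) that submits a batch of short blocking tasks to either executor. With the fixed pool, tasks queue behind 200 shared platform threads, while the virtual-thread executor starts one cheap thread per task. The ExecutorDemo class, the runTasks helper, and the task count are illustrative only; it also makes the THREAD_TYPE check null-safe.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorDemo {

    // Mirrors the sample app's selection logic, but null-safe: a missing
    // THREAD_TYPE value falls back to the fixed platform-thread pool.
    static ExecutorService chooseExecutor(String threadType) {
        if ("virtual".equals(threadType)) {
            return Executors.newVirtualThreadPerTaskExecutor();
        }
        return Executors.newFixedThreadPool(200);
    }

    // Submits a batch of short blocking tasks and returns how many completed.
    static int runTasks(ExecutorService executor, int tasks) {
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            executor.submit(() -> {
                try {
                    Thread.sleep(10); // simulate short blocking work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
                latch.countDown();
            });
        }
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor.shutdown();
        return completed.get();
    }

    public static void main(String[] args) {
        // With virtual threads, all 500 sleeps overlap on a handful of
        // carrier threads; with the fixed pool they queue behind 200 workers.
        System.out.println(runTasks(chooseExecutor("virtual"), 500));
    }
}
```

Note that on Java 19 and 20 this requires the --enable-preview flag; on Java 21+ virtual threads are a final feature.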

In order to use virtual threads in Java 19 we need to enable preview mode during compilation. With Maven we need to enable preview features using maven-compiler-plugin as shown below.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.10.1</version>
  <configuration>
    <release>19</release>
    <compilerArgs>
      <arg>--enable-preview</arg>
    </compilerArgs>
  </configuration>
</plugin>

Install Knative on Kubernetes

This and the next step are not required to run the native application on Kubernetes. We will use Knative to easily autoscale the app in reaction to the volume of incoming traffic. In the next section, I’ll describe how to run a monitoring stack on Kubernetes.

The simplest way to install Knative on Kubernetes is with the kubectl command. We just need the Knative Serving component without any additional features. The Knative CLI (kn) is not required. We will deploy the application from the YAML manifest using Skaffold.

First, let’s install the required custom resources with the following command:

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-crds.yaml

Then, we can install the core components of Knative Serving by running the command:

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-core.yaml

In order to access Knative services outside of the Kubernetes cluster we also need to install a networking layer. By default, Knative uses Kourier as an ingress. We can install the Kourier controller by running the following command.

$ kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.8.1/kourier.yaml

Finally, let’s configure Knative Serving to use Kourier with the following command:

$ kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

If you don’t have an external domain configured or you are running Knative on the local cluster you need to configure DNS. Otherwise, you would have to run curl commands with a host header. Knative provides a Kubernetes Job that sets sslip.io as the default DNS suffix.

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-default-domain.yaml

The generated URL contains the name of the service, the namespace, and the address of your Kubernetes cluster. Since I’m running my service on the local Kubernetes cluster in the demo-sless namespace, my service is available under the following address:

http://sample-java-concurrency.demo-sless.127.0.0.1.sslip.io

But before we deploy the sample app on Knative, let’s do some other things.

Install Prometheus Stack on Kubernetes

As I mentioned before, we can also install a monitoring stack on Kubernetes.

The simplest way to install it is with the kube-prometheus-stack Helm chart. The package contains Prometheus and Grafana. It also includes all required rules and dashboards to visualize the basic metrics of your Kubernetes cluster. Firstly, let’s add the Helm repository containing our chart:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Then we can install the kube-prometheus-stack Helm chart in the prometheus namespace with the following command:

$ helm install prometheus-stack prometheus-community/kube-prometheus-stack  \
    -n prometheus \
    --create-namespace

If everything goes fine, you should see a similar list of Kubernetes services:

$ kubectl get svc -n prometheus
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                       ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   11s
prometheus-operated                         ClusterIP   None             <none>        9090/TCP                     10s
prometheus-stack-grafana                    ClusterIP   10.96.218.142    <none>        80/TCP                       23s
prometheus-stack-kube-prom-alertmanager     ClusterIP   10.105.10.183    <none>        9093/TCP                     23s
prometheus-stack-kube-prom-operator         ClusterIP   10.98.190.230    <none>        443/TCP                      23s
prometheus-stack-kube-prom-prometheus       ClusterIP   10.111.158.146   <none>        9090/TCP                     23s
prometheus-stack-kube-state-metrics         ClusterIP   10.100.111.196   <none>        8080/TCP                     23s
prometheus-stack-prometheus-node-exporter   ClusterIP   10.102.39.238    <none>        9100/TCP                     23s

We will analyze Grafana dashboards with memory and CPU statistics. We can enable port-forward to access it locally on the defined port, for example 9080:

$ kubectl port-forward svc/prometheus-stack-grafana 9080:80 -n prometheus

The default username for Grafana is admin and password prom-operator.

We will create two panels in the custom Grafana dashboard. First of them will show the memory usage per single pod in the demo-sless namespace.

sum(container_memory_working_set_bytes{namespace="demo-sless"} / (1024 * 1024)) by (pod)

The second of them will show the average CPU usage per single pod in the demo-sless namespace.

rate(container_cpu_usage_seconds_total{namespace="demo-sless"}[3m])

You can import both of these panels directly to Grafana from the k8s/grafana-dasboards.json file in the GitHub repo.
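Note that the rate() expression over container_cpu_usage_seconds_total yields one time series per container in each pod. If you prefer a single per-pod series, matching the memory panel, you can aggregate it; this is a hedged variant, not taken from the repository dashboard:

```promql
sum(rate(container_cpu_usage_seconds_total{namespace="demo-sless"}[3m])) by (pod)
```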

Build and Deploy a native Java Application

We have already created the sample app and then configured the Kubernetes environment. Now, we may proceed to the deployment phase. Our goal here is to simplify the process of building a native image and running it on Kubernetes as much as possible. Therefore, we will use Cloud Native Buildpacks and Skaffold. With Buildpacks we don’t need to have anything installed on our laptop besides Docker. Skaffold can be easily integrated with Buildpacks to automate the whole process of building and running the app on Kubernetes. You just need to install the skaffold CLI on your machine.

For building a native image of a Java application we may use Paketo Buildpacks. It provides a dedicated buildpack for GraalVM called Paketo GraalVM Buildpack. We should include it in the configuration using the paketo-buildpacks/graalvm name. Since Skaffold supports Buildpacks, we should set all the properties inside the skaffold.yaml file. We need to override some default settings with environment variables. First of all, we have to set the version of Java to 19 and enable preview features (virtual threads). The Kubernetes deployment manifest is available under the k8s/deployment.yaml path.

apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: sample-java-concurrency
build:
  artifacts:
  - image: piomin/sample-java-concurrency
    buildpacks:
      builder: paketobuildpacks/builder:base
      buildpacks:
        - paketo-buildpacks/graalvm
        - paketo-buildpacks/java-native-image
      env:
        - BP_NATIVE_IMAGE=true
        - BP_JVM_VERSION=19
        - BP_NATIVE_IMAGE_BUILD_ARGUMENTS=--enable-preview
  local:
    push: true
deploy:
  kubectl:
    manifests:
    - k8s/deployment.yaml

Knative simplifies not only autoscaling but also Kubernetes manifests. Here’s the manifest for our sample app, available in the k8s/deployment.yaml file. We need to define a single Service object containing the details of the application container. We will change the autoscaling target from the default 200 concurrent requests to 80. It means that if a single instance of the app processes more than 80 requests simultaneously, Knative will create a new instance of the app (or a pod, to be more precise). In order to enable virtual threads for our app, we also need to set the THREAD_TYPE environment variable to virtual.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-java-concurrency
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - name: sample-java-concurrency
          image: piomin/sample-java-concurrency
          ports:
            - containerPort: 8080
          env:
            - name: THREAD_TYPE
              value: virtual
            - name: JAVA_TOOL_OPTIONS
              value: --enable-preview

Assuming you already installed Skaffold, the only thing you need to do is to run the following command:

$ skaffold run -n demo-sless

Or you can just deploy a ready image from my registry on Docker Hub. However, in that case, you need to change the image tag in the deployment.yaml manifest to virtual-native.

Once you deploy the app, you can verify the list of Knative services. The name of our target service is sample-java-concurrency. The address of the service is returned in the URL field.

$ kn service list -n demo-sless

Load Testing

We will run three testing scenarios today. In the first of them, we will test a standard compilation and a fixed thread pool of size 200. In the second, we will test a standard compilation with virtual threads. The final test will check native compilation in conjunction with virtual threads. In all these scenarios, we will set the same autoscaling target: 80 concurrent requests. I’m using the k6 tool for load tests. Each test scenario consists of four steps, and each step takes 2 minutes. In the first step, we are simulating 50 users.

$ k6 run -u 50 -d 120s k6-test.js

Then, we are simulating 100 users.

$ k6 run -u 100 -d 120s k6-test.js

Finally, we run the test for 200 users twice. So, in total, there are four tests with 50, 100, 200, and 200 users, which takes 8 minutes.

$ k6 run -u 200 -d 120s k6-test.js

Let’s verify the results. Here is our k6 test script, written in JavaScript.

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const res = http.get(`http://sample-java-concurrency.demo-sless.127.0.0.1.sslip.io/example`);
  check(res, {
    'is status 200': (res) => res.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}
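The three separate k6 invocations above can also be expressed as a single script with ramping stages. This is a sketch of the k6 options configuration, assuming the same target URL; note that such scripts run under the k6 runtime, not Node.js:

```javascript
import http from 'k6/http';
import { check } from 'k6';

// Ramping profile mirroring the manual runs: 50, 100, then 200 users.
export const options = {
  stages: [
    { duration: '2m', target: 50 },
    { duration: '2m', target: 100 },
    { duration: '4m', target: 200 },
  ],
};

export default function () {
  const res = http.get('http://sample-java-concurrency.demo-sless.127.0.0.1.sslip.io/example');
  check(res, { 'is status 200': (r) => r.status === 200 });
}
```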

Test for Standard Compilation and Threads

The diagram visible below shows memory usage at each phase of the test scenario. After simulating 200 users, Knative scales up the number of instances. Theoretically, it should have done that already during the 100-user test, but Knative measures incoming traffic at the level of the sidecar container inside the pod. The memory usage for the first instance is around ~900MB (it also includes the sidecar container usage).

graalvm-virtual-threads-kubernetes-memory

Here’s a similar view as before, but for the CPU usage. The highest consumption, ~1.2 core, occurred before autoscaling kicked in. Then, depending on the number of instances, it ranges from ~0.4 core to ~0.7 core. As I mentioned before, we are using a time-consuming BigInteger constructor to simulate CPU usage under a heavy load.

graalvm-virtual-threads-kubernetes-cpu

Here are the test results for 50 users. The application was able to process ~105k requests in 2 minutes. The highest processing time value was ~3 seconds.

graalvm-virtual-threads-kubernetes-load-test

Here are the test results for 100 users. The application was able to process ~130k requests in 2 minutes with an average response time of ~90ms.

graalvm-virtual-threads-kubernetes-heavy-load

Finally, we have the results for the 200-user test. The application was able to process ~135k requests in 2 minutes with an average response time of ~175ms. The failure rate was at the level of 0.02%.

Test for Standard Compilation and Virtual Threads

As in the previous section, here’s the diagram that shows memory usage at each phase of the test scenario. After simulating 100 users, Knative scales up the number of instances. Theoretically, it should run a third instance of the app for 200 users. The memory usage for the first instance is around ~850MB (it also includes the sidecar container usage).

graalvm-virtual-threads-kubernetes-memory-2

Here’s a similar view as before, but for the CPU usage. The highest consumption, ~1.1 core, occurred before autoscaling kicked in. Then, depending on the number of instances, it ranges from ~0.3 core to ~0.7 core.

Here are the test results for 50 users. The application was able to process ~105k requests in 2 minutes. The highest processing time value was ~2.2 seconds.

Here are the test results for 100 users. The application was able to process ~115k requests in 2 minutes with an average response time of ~100ms.

Finally, we have the results for the 200-user test. The application was able to process ~135k requests in 2 minutes with an average response time of ~180ms. The failure rate was at the level of 0.02%.

Test for Native Compilation and Virtual Threads

As in the previous sections, here’s the diagram that shows memory usage at each phase of the test scenario. After simulating 100 users, Knative scales up the number of instances. Theoretically, it should run a third instance of the app for 200 users (the third pod visible on the diagram was in fact in the Terminating phase for some time). The memory usage for the first instance is around ~50MB.

graalvm-virtual-threads-kubernetes-native-memory

Here’s a similar view as before, but for the CPU usage. The highest consumption, ~1.3 core, occurred before autoscaling kicked in. Then, depending on the number of instances, it ranges from ~0.3 core to ~0.9 core.

Here are the test results for 50 users. The application was able to process ~75k requests in 2 minutes. The highest processing time value was ~2 seconds.

Here are the test results for 100 users. The application was able to process ~85k requests in 2 minutes with an average response time of ~140ms.

Finally, we have the results for the 200-user test. The application was able to process ~100k requests in 2 minutes with an average response time of ~240ms. Moreover, there were no failures during the second 200-user attempt.

Summary

In this article, I tried to compare the behavior of the Java app for GraalVM native compilation with virtual threads on Kubernetes with a standard approach. There are several conclusions after running all described tests:

  • There are no significant differences between standard and virtual threads when it comes to resource usage or request processing time. The resource usage is slightly lower for virtual threads, while the processing time is slightly lower for standard threads. However, if our handler method took more time, this proportion would change in favor of virtual threads.
  • Autoscaling works somewhat better for virtual threads, although I’m not sure why 🙂 The number of instances was scaled up at 100 users with the target set to 80 for virtual threads, but not for standard threads. Virtual threads also give us more flexibility when setting an autoscaling target: for standard threads, we have to choose a value lower than the thread pool size, while for virtual threads we can set any reasonable value.
  • Native compilation significantly reduces the app’s memory usage. For the native app, it was ~50MB instead of ~900MB. On the other hand, the CPU consumption was slightly higher for the native app.
  • The native app processes requests more slowly than the standard app. In all the tests, the number of processed requests was about 30% lower than for the standard app.

The post Native Java with GraalVM and Virtual Threads on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/01/04/native-java-with-graalvm-and-virtual-threads-on-kubernetes/feed/ 7 13847
Serverless Java Functions on OpenShift https://piotrminkowski.com/2021/11/30/serverless-java-functions-on-openshift/ https://piotrminkowski.com/2021/11/30/serverless-java-functions-on-openshift/#respond Tue, 30 Nov 2021 13:52:46 +0000 https://piotrminkowski.com/?p=10262 In this article, you will learn how to create and deploy serverless, Knative based functions on OpenShift. We will use a single kn CLI command to build and run our applications on the cluster. How we can do that? With OpenShift Serverless Functions we may use the kn func plugin that allows us to work […]

The post Serverless Java Functions on OpenShift appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create and deploy serverless, Knative-based functions on OpenShift. We will use a single kn CLI command to build and run our applications on the cluster. How can we do that? With OpenShift Serverless Functions we may use the kn func plugin that allows us to work directly with the source code. It uses the Cloud Native Buildpacks API to create container images. It supports several runtimes like Node.js, Python, or Go. However, we will try the Java runtimes based on the Quarkus and Spring Boot frameworks.

Prerequisites

You need two things to be able to run this exercise by yourself. Firstly, you need to run Docker or Podman on your local machine, because Cloud Native Buildpacks use it for running builds. If you are not familiar with Cloud Native Buildpacks, you can read my article about it. I tried to configure Podman according to this part of the documentation, but I did not succeed (on macOS). With Docker, it just works, so I gave up on further attempts with Podman.

On the other hand, you need to have a target OpenShift cluster with the serverless module installed. You can run it locally using CodeReady Containers (crc). But in my opinion, a better idea is to try the developer sandbox available online here. It contains all you need to start development, including OpenShift Serverless, which is available by default on the sandbox version of OpenShift.

Finally, you need to install the oc client and the kn CLI locally. Since we use the kn func plugin, we need to install the Knative CLI version provided by Red Hat. The detailed installation instructions are available here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then go to the serverless/functions directory. After that, you should just follow my instructions. Let’s begin.

Create OpenShift Serverless function with Quarkus

We can generate sample application source code using a single kn command. We may choose between multiple runtimes and two templates. Currently, there are six runtimes available. By default it is node (for Node.js applications), but you can also set quarkus, springboot, typescript, go, or python. There are also two templates available: http for simple REST-based applications and events for applications leveraging the Knative Eventing approach to communication. Let’s create our first application using the quarkus runtime and the events template.

$ kn func create -l quarkus -t events caller-function

Now, go to the caller-function directory and edit the generated pom.xml file. Firstly, we will set the Java version to 11 and the Quarkus version to the latest release.

<properties>
  <compiler-plugin.version>3.8.1</compiler-plugin.version>
  <maven.compiler.parameters>true</maven.compiler.parameters>
  <maven.compiler.source>11</maven.compiler.source>
  <maven.compiler.target>11</maven.compiler.target>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  <quarkus-plugin.version>2.4.2.Final</quarkus-plugin.version>
  <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id>
  <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
  <quarkus.platform.version>2.4.2.Final</quarkus.platform.version>
  <surefire-plugin.version>3.0.0-M5</surefire-plugin.version>
</properties>

By default, the kn func plugin includes some test-scoped dependencies and a single dependency on the Quarkus Funqy Knative Events extension. There is also Quarkus SmallRye Health to automatically generate liveness and readiness health checks.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-knative-events</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

By default, kn func generates a simple function that takes a CloudEvent as an input and sends the same event as an output. I will not change much there; I'll just replace System.out with a Logger implementation in order to print logs.

public class Function {

   @Inject
   Logger logger;

   @Funq
   public CloudEvent<Output> function(CloudEvent<Input> input) {
      logger.infof("New event: %s", input);
      Output output = new Output(input.data().getMessage());
      return CloudEventBuilder.create().build(output);
   }

}

Assuming you have already logged in to your OpenShift cluster using the oc client you can proceed to the function deployment. In fact, you just need to go to your application directory and then run a single kn func command as shown below.

$ kn func deploy -i quay.io/pminkows/caller-function -v

Once you run the command visible above, the local build starts on Docker. If it finishes successfully, we proceed to the deployment phase.

openshift-serverless-functions-build-and-deploy

In the application root directory, there is also an automatically generated configuration file func.yaml.

name: caller-function
namespace: ""
runtime: quarkus
image: quay.io/pminkows/caller-function
imageDigest: sha256:5d3ef16e1282bc5f6367dff96ab7bb15487199ac3939e262f116657a83706245
builder: quay.io/boson/faas-jvm-builder:v0.8.4
builders: {}
buildpacks: []
healthEndpoints: {}
volumes: []
envs: []
annotations: {}
options: {}
labels: []

Create OpenShift Serverless functions with Spring Boot

Now, we will do exactly the same thing as before, but this time for the Spring Boot application. In order to create a Spring Boot function we just need to set springboot as the runtime name. The name of our application is callme-function.

$ kn func create -l springboot -t events callme-function

Then we go to the callme-function directory. Firstly, let’s edit the Maven pom.xml. As with Quarkus, I’ll set the latest version of Spring Boot.

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.6.1</version>
  <relativePath />
</parent>

The generated application is built on top of the Spring Cloud Function project. We don’t need to add there anything.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-function-web</artifactId>
</dependency>

The Spring Boot function code is a little more complicated than that generated by the Quarkus framework. It uses the Spring functional programming style, where a Function bean represents an HTTP POST endpoint with an input and an output response.

@SpringBootApplication
public class SpringCloudEventsApplication {

  private static final Logger LOGGER = Logger.getLogger(
      SpringCloudEventsApplication.class.getName());

  public static void main(String[] args) {
    SpringApplication.run(SpringCloudEventsApplication.class, args);
  }

  @Bean
  public Function<Message<Input>, Output> uppercase(CloudEventHeaderEnricher enricher) {
    return m -> {
      HttpHeaders httpHeaders = HeaderUtils.fromMessage(m.getHeaders());
      
      LOGGER.log(Level.INFO, "Input CE Id:{0}", httpHeaders.getFirst(
          ID));
      LOGGER.log(Level.INFO, "Input CE Spec Version:{0}",
          httpHeaders.getFirst(SPECVERSION));
      LOGGER.log(Level.INFO, "Input CE Source:{0}",
          httpHeaders.getFirst(SOURCE));
      LOGGER.log(Level.INFO, "Input CE Subject:{0}",
          httpHeaders.getFirst(SUBJECT));

      Input input = m.getPayload();
      LOGGER.log(Level.INFO, "Input {0} ", input);
      Output output = new Output();
      output.input = input.input;
      output.operation = httpHeaders.getFirst(SUBJECT);
      output.output = input.input != null ? input.input.toUpperCase() : "NO DATA";
      return output;
    };
  }

  @Bean
  public CloudEventHeaderEnricher attributesProvider() {
    return attributes -> attributes
        .setSpecVersion("1.0")
        .setId(UUID.randomUUID()
            .toString())
        .setSource("http://example.com/uppercase")
        .setType("com.redhat.faas.springboot.events");
  }

  @Bean
  public Function<String, String> health() {
    return probe -> {
      if ("readiness".equals(probe)) {
        return "ready";
      } else if ("liveness".equals(probe)) {
        return "live";
      } else {
        return "OK";
      }
    };
  }
}

Because there are two functions (@Bean Function) defined in the generated code, you need to add the following property in the application.properties file. With spring-cloud-function-web, each function is then exposed under its own path (/uppercase and /health), which is why the Knative Trigger defined later targets the /uppercase URI.

spring.cloud.function.definition = uppercase;health

Deploying serverless functions on OpenShift

We have two sample applications deployed on the OpenShift cluster. The first of them is written in Quarkus, while the second of them in Spring Boot. Those applications will communicate with each other through events. So in the first step, we need to create a Knative Eventing broker.

$ kn broker create default

Let’s check if the broker has been successfully created.

$ kn broker list
NAME      URL                                                                                  AGE   CONDITIONS   READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/piomin-serverless/default   12m   5 OK / 5     True

Then, let’s display a list of running Knative services:

$ kn service list
NAME              URL                                                                                       LATEST                  AGE   CONDITIONS   READY   REASON
caller-function   https://caller-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com   caller-function-00002   18m   3 OK / 3     True    
callme-function   https://callme-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com   callme-function-00006   11h   3 OK / 3     True 

Send and receive CloudEvent with Quarkus

The architecture of our solution is visible in the picture below. The caller-function receives events sent directly by us using the kn func plugin. Then it processes the input event, creates a new CloudEvent and sends it to the Knative Broker. The broker just receives events. To provide more advanced event routing we need to define the Knative Trigger. The trigger is able to filter events and then send them directly to the target sink. On the other hand, those events are received by the callme-function.

openshift-serverless-functions-arch

Ok, so now we need to rewrite the caller-function to add a step that creates and sends a CloudEvent to the Knative Broker. To do that, we need to declare and invoke the Quarkus REST client. Firstly, let’s add the required dependencies.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>

In the next step, we will create a client interface with a declaration of the calling method. The CloudEvent specification requires four HTTP headers to be set on the POST request. The context path of the Knative Broker is /piomin-serverless/default, as we have already checked using the kn broker list command.

@Path("/piomin-serverless/default")
@RegisterRestClient
public interface BrokerClient {

   @POST
   @Produces(MediaType.APPLICATION_JSON)
   String sendEvent(Output event,
                    @HeaderParam("Ce-Id") String id,
                    @HeaderParam("Ce-Source") String source,
                    @HeaderParam("Ce-Type") String type,
                    @HeaderParam("Ce-Specversion") String version);
}

We also need to set the broker address in application.properties. We use the standard mp-rest/url property handled by the MicroProfile REST Client.

functions.BrokerClient/mp-rest/url = http://broker-ingress.knative-eventing.svc.cluster.local 

Here’s the final implementation of our function in the caller-function module. The type of event is caller.output. In fact, we can set any name as an event type.

public class Function {

   @Inject
   Logger logger;
   @Inject
   @RestClient
   BrokerClient client;

   @Funq
   public CloudEvent<Output> function(CloudEvent<Input> input) {
      logger.infof("New event: %s", input);
      Output output = new Output(input.data().getMessage());
      CloudEvent<Output> outputCloudEvent = CloudEventBuilder.create().build(output);
      client.sendEvent(output,
                input.id(),
                "http://caller-function",
                "caller.output",
                input.specVersion());
      return outputCloudEvent;
    }

}

Finally, we will create a Knative Trigger. It receives events incoming to the broker and filters them by type. Only events with the type caller.output are forwarded to the callme-function.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: callme-trigger
spec:
  broker: default
  filter:
    attributes:
      type: caller.output
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: callme-function
    uri: /uppercase

Now, we can send a test CloudEvent to the caller-function directly from the local machine with the following command (ensure you are calling it in the caller-function directory):

$ kn func emit -d "Hello"

The Output class in the caller-function and the Input class in the callme-function should have the same fields. By default, kn func generates different fields for the Quarkus and Spring Boot example applications, so you need to refactor one of these objects. I changed the field in the callme-function Input class.

Summary

Let’s summarize our exercise. We built and deployed two applications (Quarkus and Spring Boot) directly from the source code to the target cluster. Thanks to OpenShift Serverless Functions, we didn’t have to provide any additional configuration to deploy them as Knative services.

If there is no incoming traffic, Knative services are automatically scaled down to zero after 60 seconds (by default).
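The 60-second default corresponds to the stable window the Knative autoscaler evaluates before scaling down. If you want a service to linger longer (or shorter) before scaling to zero, the window can be overridden per revision with an annotation. A sketch assuming our caller-function service; this is not part of the repository sources:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: caller-function
spec:
  template:
    metadata:
      annotations:
        # Length of the sliding window the autoscaler evaluates
        # before scaling down (the default is 60s).
        autoscaling.knative.dev/window: "90s"
```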

So, to generate some traffic, we may use the kn func emit command. It sends a CloudEvent message directly to the target application, in our case the caller-function (Quarkus). After receiving an input event, the pod with the caller-function starts. After startup, it sends a CloudEvent message to the Knative Broker. Finally, the event goes to the callme-function (Spring Boot), which also starts, as shown below.

openshift-serverless-functions-pods

As you can see, OpenShift provides several simplifications when working with Knative. What’s important, you can now easily test them by yourself using the developer sandbox version of OpenShift available online.

The post Serverless Java Functions on OpenShift appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/11/30/serverless-java-functions-on-openshift/feed/ 0 10262
Spring Boot on Knative https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/ https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/#respond Mon, 01 Mar 2021 11:28:54 +0000 https://piotrminkowski.com/?p=9506 In this article, I’ll explain what is Knative and how to use it with Spring Boot. Although Knative is a serverless platform, we can run there any type of application (not just function). Therefore, we are going to run there a standard Spring Boot application that exposes REST API and connects to a database. Knative […]

The post Spring Boot on Knative appeared first on Piotr's TechBlog.

In this article, I’ll explain what Knative is and how to use it with Spring Boot. Although Knative is a serverless platform, we can run any type of application on it (not just functions). Therefore, we are going to run a standard Spring Boot application there that exposes a REST API and connects to a database.

Knative introduces a new way of managing your applications on Kubernetes. It extends Kubernetes with some new key features. One of the most significant of them is “scale to zero”. If Knative detects that a service is not used, it scales the number of running instances down to zero. It also provides built-in autoscaling based on concurrency or the number of requests per second. We may additionally take advantage of revision tracking, which is responsible for switching from one version of your application to another. With Knative you just have to focus on your core logic.

All the features I described above are provided by the component called “Knative Serving”. There are also two other components: “Eventing” and “Build”. The Build component is deprecated and has been replaced by Tekton. The Eventing component deserves attention as well; however, I’ll discuss it in more detail in a separate article.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions 🙂

I used the same application as in some of my previous articles about Spring Boot and Kubernetes. I just wanted to emphasize that you don’t have to change anything in the source code to run it on Knative. The only required change will be in the YAML manifest.

Since Knative provides built-in autoscaling you may want to compare it with the horizontal pod autoscaler (HPA) on Kubernetes. To do that you may read the article Spring Boot Autoscaling on Kubernetes. If you are interested in how to easily deploy applications on Kubernetes read the following article about the Okteto platform.

Install Knative on Kubernetes

Of course, before we start Spring Boot development we need to install Knative on Kubernetes. We can do it using the kubectl CLI or an operator. You can find the detailed installation instructions here. I decided to try it on OpenShift, which is obviously the fastest way: I could do it with one click using the OpenShift Serverless Operator. No matter which type of installation you choose, the further steps apply everywhere.

Using Knative CLI

This step is optional. You can deploy and manage applications on Knative with the CLI. To download the CLI, go to the site https://knative.dev/docs/install/install-kn/. Then you can deploy the application using a Docker image.

$ kn service create sample-spring-boot-on-kubernetes \
   --image piomin/sample-spring-boot-on-kubernetes:latest

We can also verify a list of running services with the following command.

$ kn service list

For more advanced deployments it will be more suitable to use a YAML manifest. We will build directly from the source code with Skaffold and Jib. Firstly, let’s take a brief look at our Spring Boot application.

Spring Boot application for Knative

As I mentioned before, we are going to create a typical Spring Boot REST-based application that connects to a Mongo database. The database is deployed on Kubernetes. Our model class uses the person collection in MongoDB. Let’s take a look at it.

@Document(collection = "person")
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class Person {

   @Id
   private String id;
   private String firstName;
   private String lastName;
   private int age;
   private Gender gender;
}

We use Spring Data MongoDB to integrate our application with the database. In order to simplify this integration we take advantage of its “repositories” feature.

public interface PersonRepository extends CrudRepository<Person, String> {
   Set<Person> findByFirstNameAndLastName(String firstName, String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);
}

Our application exposes several REST endpoints for adding, updating, deleting, and searching data. Here’s the controller class implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
			@PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

We inject database connection settings and credentials using environment variables. Our application exposes endpoints for liveness and readiness health checks. The readiness endpoint verifies a connection with the Mongo database. Of course, we use the built-in feature from Spring Boot Actuator for that.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: mongo
      probes:
        enabled: true

Defining Knative Service in YAML

Firstly, we need to define a YAML manifest with the Knative Service definition. It sets an autoscaling strategy using the Knative Pod Autoscaler (KPA). In order to do that, we have to add the annotation autoscaling.knative.dev/target with the number of simultaneous requests that can be processed by each instance of the application. By default, it is 100. We decrease that limit to 20 requests. Of course, we need to set liveness and readiness probes for the container. We also refer to a Secret to inject the MongoDB settings.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "20"
    spec:
      containers:
        - image: piomin/sample-spring-boot-on-kubernetes
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
          env:
            - name: MONGO_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password

Configure Skaffold and Jib for Knative deployment

We will use Skaffold to automate the deployment of our application on Knative. Skaffold is a command-line tool that allows you to run an application on Kubernetes using a single command. You may read more about it in the article Local Java Development on Kubernetes. It can easily be integrated with the Jib Maven plugin. We just need to set jib as the default option in the build section of the Skaffold configuration. We can also define a list of YAML manifests applied during the deploy phase. The skaffold.yaml file should be placed in the project root directory. Here’s the current Skaffold configuration. As you can see, it applies the manifest with the Knative Service definition.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/knative-service.yaml

Skaffold activates the jib Maven profile during the build. Within this profile, we place the jib-maven-plugin. Jib is useful for building images without a Docker daemon.

<profile>
   <id>jib</id>
   <activation>
      <activeByDefault>false</activeByDefault>
   </activation>
   <build>
      <plugins>
         <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>2.8.0</version>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, all we need to do is run the following command. It builds our application, creates and pushes a Docker image, and runs it on Knative using knative-service.yaml.

$ skaffold run

Verify Spring Boot deployment on Knative

Now, we can verify our deployment on Knative. To do that let’s execute the command kn service list as shown below. We have a single Knative Service with the name sample-spring-boot-on-kubernetes.

spring-boot-knative-services

Then, let’s imagine we deploy three versions (revisions) of our application. To do that, let’s just make some changes in the source code and redeploy our service using skaffold run. It creates new revisions of our Knative Service. However, the whole traffic is forwarded to the latest revision (with the -vlskg suffix).

spring-boot-knative-revisions

With Knative we can easily split traffic between multiple revisions of a single service. To do that we need to add a traffic section to the Knative Service YAML configuration. We define a percentage of the whole traffic per revision, as shown below.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    ...
  traffic:
    - latestRevision: true
      percent: 60
      revisionName: sample-spring-boot-on-kubernetes-vlskg
    - latestRevision: false
      percent: 20
      revisionName: sample-spring-boot-on-kubernetes-t9zrd
    - latestRevision: false
      percent: 20
      revisionName: sample-spring-boot-on-kubernetes-9xhbw

Let’s take a look at the graphical representation of our current architecture. 60% of traffic is forwarded to the latest revision, while both previous revisions receive 20% of traffic.

spring-boot-knative-openshift

Autoscaling and scale to zero

By default, Knative supports autoscaling. We may choose between two types of targets: concurrency and requests-per-second (RPS). The default target is concurrency. As you probably remember, I have overridden this default value to 20 with the autoscaling.knative.dev/target annotation. So, our goal now is to verify autoscaling. To do that we need to send many simultaneous requests to the application. Of course, the incoming traffic is distributed across three different revisions of Knative Service.

Fortunately, we may easily simulate heavy traffic with the siege tool. We will call the GET /persons endpoint that returns all available persons. We send 150 concurrent requests with the command visible below.

$ siege http://sample-spring-boot-on-kubernetes-pminkows-serverless.apps.cluster-7260.7260.sandbox1734.opentlc.com/persons \
   -i -v -r 1000  -c 150 --no-parser

Under the hood, Knative still creates a Deployment and scales the number of running pods up or down. So, if you have three revisions of a single Service, there are three different Deployments created. Finally, I have 10 running pods for the latest Deployment, which receives 60% of the traffic. There are also 3 and 2 running pods for the previous revisions.
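The pod counts above can be approximated with a simple back-of-the-envelope model. This is my own simplification, not the actual KPA algorithm (the real autoscaler applies a target-utilization factor, a sliding observation window, and panic-mode scaling, which is why the observed pod counts come out higher than this naive estimate):

```java
public class KpaEstimate {

    // Naive model: pods needed = ceil(observed concurrent requests / per-pod target)
    static int estimatePods(int concurrentRequests, int targetPerPod) {
        return (int) Math.ceil((double) concurrentRequests / targetPerPod);
    }

    public static void main(String[] args) {
        // 150 concurrent siege requests split 60/20/20 across the three revisions,
        // each revision configured with autoscaling.knative.dev/target: "20"
        System.out.println(estimatePods(90, 20)); // latest revision: prints 5
        System.out.println(estimatePods(30, 20)); // each older revision: prints 2
    }
}
```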

What will happen if there is no traffic coming to the service? Knative will scale down the number of running pods for all the deployments to zero.

Conclusion

In this article, you learned how to deploy a Spring Boot application as a Knative service using Skaffold and Jib. I explained with examples how to create a new revision of the Service and how to distribute traffic across those revisions. We also tested autoscaling based on concurrent requests and scale to zero in the case of no incoming traffic. You may expect more articles about Knative soon! Not only with Spring Boot 🙂

Serverless on AWS with DynamoDB, SNS and CloudWatch https://piotrminkowski.com/2017/07/03/serverless-on-aws-with-dynamodb-sns-and-cloudwatch/ https://piotrminkowski.com/2017/07/03/serverless-on-aws-with-dynamodb-sns-and-cloudwatch/#respond Mon, 03 Jul 2017 21:20:03 +0000 https://piotrminkowski.wordpress.com/?p=4399 In one of my previous posts Serverless on AWS Lambda I presented an example of creating REST API based on AWS Lambda functions. However, we should keep in mind that this mechanism is also used to exchange events between services (SaaS) provided by AWS. Now I will show such an example of using object database […]

The post Serverless on AWS with DynamoDB, SNS and CloudWatch appeared first on Piotr's TechBlog.

In one of my previous posts, Serverless on AWS Lambda, I presented an example of creating a REST API based on AWS Lambda functions. However, we should keep in mind that this mechanism is also used to exchange events between services provided by AWS. Now I will show such an example: using an object database like DynamoDB, sending messages with Simple Notification Service (SNS), and monitoring logs with CloudWatch.

Let’s begin with our sample application. For test purposes I designed a simple system which grants bonuses based on incoming orders. First, we invoke a service which puts an order record into a DynamoDB table. The insert event triggers a Lambda function which processes it and performs a transaction on the customer account whose id is stored in another DynamoDB table. Afterwards, we send a message with the order information to a topic. This topic is created using the Amazon SNS service, and there are three Lambda functions listening for incoming messages. Each of them grants a bonus that recharges the customer account based on different input data. The system architecture is visualized in the figure below. The sample application source code is available on GitHub.
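Based on how it is constructed later in ProcessOrder, the Order object exchanged between the functions can be sketched as a plain POJO with three fields. This is a minimal reconstruction; the real class in the repository may contain more members and annotations.

```java
// Minimal reconstruction of the Order payload: id, accountId and amount
// are the three fields read from the DynamoDB stream record in ProcessOrder.
public class Order {

    private String id;
    private String accountId;
    private int amount;

    public Order() {
    }

    public Order(String id, String accountId, int amount) {
        this.id = id;
        this.accountId = accountId;
        this.amount = amount;
    }

    public String getId() { return id; }
    public String getAccountId() { return accountId; }
    public int getAmount() { return amount; }
}
```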

aws

Every AWS Lambda function needs to implement the RequestHandler interface. For more details about basic rules, the deployment process, and useful tools, go to my first article on the subject, Serverless on AWS Lambda. Coming back to our sample, below you can see the implementation of the first lambda function, PostOrder. It does nothing more than save the incoming Order object in a DynamoDB table. For storing data in DynamoDB we can use the ORM mechanism available inside the AWS Java libraries. How to use the basic DynamoDB annotations you can also read in my first article about serverless.

[code language=”java”]
public class PostOrder implements RequestHandler<Order, Order> {

   private DynamoDBMapper mapper;

   public PostOrder() {
      AmazonDynamoDBClient client = new AmazonDynamoDBClient();
      mapper = new DynamoDBMapper(client);
   }

   @Override
   public Order handleRequest(Order o, Context ctx) {
      LambdaLogger logger = ctx.getLogger();
      mapper.save(o);
      Order r = o;
      logger.log("Order saved: " + r.getId());
      return r;
   }

}
[/code]

Assuming we have our first function implemented and deployed on AWS, we should configure the API Gateway which exposes it to the outside. To achieve it, go to the Lambda Management Console on AWS and select the PostOrder function. Then go to the Triggers section and select API Gateway as a trigger for calling your function.

lambda-1

Unfortunately, that’s not all we need to have our API Gateway redirecting requests to the Lambda function. Go to the API Gateway section and select OrderService. We should remove the ANY method and configure POST to invoke our function, as you see in the picture below.

lambda-3

Then you should see the diagram visible below, where all the steps of calling a lambda function from API Gateway are visualized.

lambda-4

What’s worth doing is creating a model object in the Models section. For the Order class it should look like in the picture below, which is compatible with JSON schema notation. After creating the model definition, set it as the request body inside the Method Request panel and as the response body inside the Method Response panel.
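For reference, an API Gateway model for the Order class could be defined with a JSON schema similar to the sketch below. The field names are taken from the Order object used in the code; the exact schema in the screenshot may differ.

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Order",
  "type": "object",
  "properties": {
    "id": { "type": "string" },
    "accountId": { "type": "string" },
    "amount": { "type": "integer" }
  }
}
```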

lambda-5

Finally, deploy the resource using the Deploy API action and try to call it at the invocation URL, which can be checked in the Stages section.

lambda-6

Let’s see the implementation of the second lambda function, ProcessOrder. It is triggered by an insert event received from the DynamoDB order table. This function is responsible for reading data from the incoming event, then creating and sending a message to the target topic. DynamodbEvent stores data as a map, where the key is the column name in the order table. To get a value from the map we have to use the accessor matching its data type: for example, a string is collected using the getS method and an integer using the getN method. The message sent to the SNS topic is serialized to a JSON string with the Jackson library.

[code language=”java”]
public class ProcessOrder implements RequestHandler<DynamodbEvent, String> {

   private AmazonSNSClient client;
   private ObjectMapper jsonMapper;

   public ProcessOrder() {
      client = new AmazonSNSClient();
      jsonMapper = new ObjectMapper();
   }

   public String handleRequest(DynamodbEvent event, Context ctx) {
      LambdaLogger logger = ctx.getLogger();
      final List<DynamodbStreamRecord> records = event.getRecords();

      for (DynamodbStreamRecord record : records) {
         try {
            logger.log(String.format("DynamoEvent: %s, %s", record.getEventName(), record.getDynamodb().getNewImage().values().toString()));
            Map<String, AttributeValue> m = record.getDynamodb().getNewImage();
            Order order = new Order(m.get("id").getS(), m.get("accountId").getS(), Integer.parseInt(m.get("amount").getN()));
            String msg = jsonMapper.writeValueAsString(order);
            logger.log(String.format("SNS message: %s", msg));
            PublishRequest req = new PublishRequest("arn:aws:sns:us-east-1:658226682183:order", jsonMapper.writeValueAsString(new OrderMessage(msg)), "Order");
            req.setMessageStructure("json");
            PublishResult res = client.publish(req);
            logger.log(String.format("SNS message sent: %s", res.getMessageId()));
         } catch (JsonProcessingException e) {
            logger.log(e.getMessage());
         }
      }

      return "OK";
   }
}
[/code]

Same as for the PostOrder function, we also need to add a trigger for ProcessOrder, but this time the trigger is a DynamoDB table.

lambda-10

In the Simple Notification Service section, create the order topic. The Amazon SNS client uses the ARN address to identify the right topic. As you can see in the picture below, there is also a topic for DynamoDB which was created with the database trigger.

lambda-11

The last implementation step in the sample is to create the lambda functions listening on the SNS topic for incoming order messages. Here’s the OrderAmountHandler function code. The logic is simple: after receiving a message, it deserializes it from JSON, then checks the order amount and modifies the balance value in the account table using the accountId from the order object.

[code language=”java”]
public class OrderAmountHandler implements RequestHandler<SNSEvent, Object> {

   private final static int AMOUNT_THRESHOLD = 1500;
   private final static int AMOUNT_BONUS_PERCENTAGE = 10;

   private DynamoDBMapper mapper;
   private ObjectMapper jsonMapper;

   public OrderAmountHandler() {
      AmazonDynamoDBClient client = new AmazonDynamoDBClient();
      mapper = new DynamoDBMapper(client);
      jsonMapper = new ObjectMapper();
   }

   @Override
   public Object handleRequest(SNSEvent event, Context context) {
      final LambdaLogger logger = context.getLogger();
      final List<SNSRecord> records = event.getRecords();

      for (SNSRecord record : records) {
         logger.log(String.format("SNSEvent: %s, %s", record.getSNS().getMessageId(), record.getSNS().getMessage()));
         try {
            Order o = jsonMapper.readValue(record.getSNS().getMessage(), Order.class);
            if (o.getAmount() >= AMOUNT_THRESHOLD) {
               logger.log(String.format("Order allowed: id=%s, amount=%d", o.getId(), o.getAmount()));
               // Load the account by the accountId stored in the order, not by the order id
               Account a = mapper.load(Account.class, o.getAccountId());
               // Grant a 10% bonus on the order amount
               a.setBalance(a.getBalance() + o.getAmount() * AMOUNT_BONUS_PERCENTAGE / 100);
               mapper.save(a);
               logger.log(String.format("Account balance updated: id=%s, amount=%d", a.getId(), a.getBalance()));
            }
         } catch (IOException e) {
            logger.log(e.getMessage());
         }
      }

      return "OK";
   }

}
[/code]

After creating and deploying all three functions, we have to subscribe them to the order topic.

lambda-7

All logs from your lambda functions can be inspected with CloudWatch service.

lambda-8

lambda-9

Don’t forget about permissions in the My Security Credentials section. For my example I had to attach the following policies to my default execution role: AmazonAPIGatewayInvokeFullAccess, AmazonDynamoDBFullAccess, AWSLambdaDynamoDBExecutionRole, and AmazonSNSFullAccess.

Serverless on AWS Lambda https://piotrminkowski.com/2017/06/23/serverless-on-aws-lambda/ https://piotrminkowski.com/2017/06/23/serverless-on-aws-lambda/#respond Fri, 23 Jun 2017 12:53:48 +0000 https://piotrminkowski.wordpress.com/?p=3953 Preface Serverless is now one of the hottest trend in IT world. A more accurate name for it is Function as a Service (FaaS). Have any of you ever tried to share your APIs deployed in the cloud? Before serverless, I had to create a virtual machine with Linux on the cloud provider’s infrastructure, and […]

The post Serverless on AWS Lambda appeared first on Piotr's TechBlog.

Preface

Serverless is now one of the hottest trends in the IT world. A more accurate name for it is Function as a Service (FaaS). Have any of you ever tried to share your APIs deployed in the cloud? Before serverless, I had to create a virtual machine with Linux on the cloud provider’s infrastructure, and then deploy and run the application implemented in, for example, Node.js or Java. With serverless, you do not have to write any commands in Linux.

How is serverless different from another very popular topic, microservices? To illustrate the difference, serverless is often referred to as nano services. For example, if we would like to create a microservice that provides an API for CRUD operations on a database table, our API would have several endpoints for searching (GET /{id}), updating (PUT), removing (DELETE), inserting (POST), and maybe a few more for searching using different input criteria. According to serverless architecture, each of those endpoints would be an independent function, created and deployed separately. While a microservice can be built on an on-premise architecture, for example with Spring Boot, serverless is closely related to cloud infrastructure.

Custom function implementation based on the cloud provider’s tools is really quick and easy. I’ll try to show it with sample functions deployed on AWS using AWS Lambda. The sample application source code for AWS serverless is available on GitHub.

How AWS serverless works

Here’s the serverless AWS Lambda solution description from the Amazon site.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

aws-serverless-lambda

AWS Lambda is a computing platform for many application scenarios. It supports applications written in Node.js, Java, C#, and Python. On the platform there are also some services available, like DynamoDB (a NoSQL database), Kinesis (a streaming service), CloudWatch (monitoring and logs), Redshift (a data warehouse solution), S3 (cloud storage), and API Gateway. Every event coming to those services can trigger a call to your Lambda function. You can also interact with those services using the AWS Lambda SDK.

AWS serverless preparation

Let’s finish with the theory; all of us like concretes the most 🙂 First of all, we need to set up an AWS account. AWS has a web management console available here, but there is also a command-line client called AWS CLI, which can be downloaded here. There are also some other tools through which we can deploy our functions on AWS; I will tell you about them later. To be able to use them, including the command-line client, we need to generate an access key. Go to the web console and select My Security Credentials on your profile, then select Continue to Security Credentials and expand Access Keys. Create your new access key and save it on disk. There are two fields: Access Key ID and Secret Access Key. If you would like to use the AWS CLI, first type aws configure and then provide those keys, the default region, and the output format (for example, JSON or text).
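After running aws configure, the CLI stores these values in plain-text files in your home directory. A typical result looks like this (placeholder values, of course):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your Access Key ID>
aws_secret_access_key = <your Secret Access Key>

# ~/.aws/config
[default]
region = us-east-1
output = json
```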

You can use the AWS CLI or even the web console to deploy your Lambda function to the cloud. However, I will present other (in my opinion better :)) solutions. If you are using Eclipse for your development, the best option is to download the AWS Toolkit plugin. With it, I’m able to upload my function to AWS Lambda or even create or modify a table in Amazon DynamoDB. After downloading the Eclipse plugin you need to provide the Access Key ID and Secret Access Key. You then have the AWS Management perspective available, where you can see all the AWS stuff including lambda functions, DynamoDB tables, identity management, and other services like S3, SNS, or SQS. You can create a special AWS Java Project or work with a standard Maven project. Just display the project menu by clicking the right button on the project and then select Amazon Web Services and Upload function to AWS Lambda…

aws-serverless-deploy-1

After selecting Upload function to AWS Lambda… you should see the window visible below. You can choose the region for your deployment (us-east-1 by default), the IAM role, and, what is most important, the name of your lambda function. We can create a new function or update an existing one.

Another interesting possibility for uploading functions to AWS Lambda is a Maven plugin. With lambda-maven-plugin we can define security credentials and all the definitions of our functions in JSON format. Here’s the plugin declaration in pom.xml. The plugin can be invoked during the Maven project build with mvn clean install lambda:deploy-lambda. Dependencies should be attached to the output JAR file; that’s why maven-shade-plugin is used during the build.

<plugin>
   <groupId>com.github.seanroy</groupId>
   <artifactId>lambda-maven-plugin</artifactId>
   <version>2.2.1</version>
   <configuration>
      <accessKey>${aws.accessKey}</accessKey>
      <secretKey>${aws.secretKey}</secretKey>
      <functionCode>${project.build.directory}/${project.build.finalName}.jar</functionCode>
      <version>${project.version}</version>
      <lambdaRoleArn>arn:aws:iam::436521214155:role/lambda_basic_execution</lambdaRoleArn>
      <s3Bucket>lambda-function-bucket-us-east-1-1498055423860</s3Bucket>
      <publish>true</publish>
      <forceUpdate>true</forceUpdate>
      <lambdaFunctionsJSON>
         [
            {
               "functionName": "PostAccountFunction",
               "description": "POST account",
               "handler": "pl.piomin.services.aws.account.add.PostAccount",
               "timeout": 30,
               "memorySize": 256,
               "keepAlive": 10
            },
            {
               "functionName": "GetAccountFunction",
               "description": "GET account",
               "handler": "pl.piomin.services.aws.account.find.GetAccount",
               "timeout": 30,
               "memorySize": 256,
               "keepAlive": 30
            },
            {
               "functionName": "GetAccountsByCustomerIdFunction",
               "description": "GET accountsCustomerId",
               "handler": "pl.piomin.services.aws.account.find.GetAccountsByCustomerId",
               "timeout": 30,
               "memorySize": 256,
               "keepAlive": 30
            }
         ]
      </lambdaFunctionsJSON>
   </configuration>
</plugin>
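The maven-shade-plugin mentioned above is not shown in the snippet. A minimal declaration that packages all dependencies into the output JAR during the package phase could look like the sketch below (the version number is an assumption):

```xml
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   <version>3.2.4</version>
   <executions>
      <execution>
         <phase>package</phase>
         <goals>
            <goal>shade</goal>
         </goals>
      </execution>
   </executions>
</plugin>
```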

AWS Lambda functions implementation

I implemented the sample AWS Lambda functions in Java. Here’s the list of dependencies inside pom.xml.

<dependencies>
   <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-events</artifactId>
      <version>1.3.0</version>
   </dependency>
   <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
   </dependency>
   <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-log4j</artifactId>
      <version>1.0.0</version>
   </dependency>
   <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.11.152</version>
   </dependency>
   <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-lambda</artifactId>
      <version>1.11.152</version>
   </dependency>
</dependencies>

Every function is connected to Amazon DynamoDB. There are two tables created for this sample: account and customer. One customer can have more than one account, and this assignment is realized through the customerId field in the account table. The AWS library for DynamoDB has ORM mapping mechanisms. Here’s the Account entity definition. By using annotations we can declare the table name, hash key, index, and table attributes.

@DynamoDBTable(tableName = "account")
public class Account implements Serializable {

   private static final long serialVersionUID = 8331074361667921244L;
   private String id;
   private String number;
   private String customerId;

   public Account() {
   }

   public Account(String id, String number, String customerId) {
      this.id = id;
      this.number = number;
      this.customerId = customerId;
   }

   @DynamoDBHashKey(attributeName = "id")
   @DynamoDBAutoGeneratedKey
   public String getId() {
      return id;
   }

   public void setId(String id) {
      this.id = id;
   }

   @DynamoDBAttribute(attributeName = "number")
   public String getNumber() {
      return number;
   }

   public void setNumber(String number) {
      this.number = number;
   }

   @DynamoDBIndexHashKey(attributeName = "customerId", globalSecondaryIndexName = "Customer-Index")
   public String getCustomerId() {
      return customerId;
   }

   public void setCustomerId(String customerId) {
      this.customerId = customerId;
   }

}

In the described sample application there are five lambda functions:
PostAccountFunction – receives an Account object from the request and inserts it into the table
GetAccountFunction – finds an account by the hash key id attribute
GetAccountsByCustomerIdFunction – finds a list of accounts by the input customerId
PostCustomerFunction – receives a Customer object from the request and inserts it into the table
GetCustomerFunction – finds a customer by the hash key id attribute

Every AWS Lambda function handler needs to implement the RequestHandler interface with one method, handleRequest. Here’s the PostAccount handler class. It connects to DynamoDB using the Amazon client and creates an ORM mapper, DynamoDBMapper, which saves the input entity in the database.

public class PostAccount implements RequestHandler<Account, Account> {

   private DynamoDBMapper mapper;

   public PostAccount() {
      AmazonDynamoDBClient client = new AmazonDynamoDBClient();
      client.setRegion(Region.getRegion(Regions.US_EAST_1));
      mapper = new DynamoDBMapper(client);
   }

   @Override
   public Account handleRequest(Account a, Context ctx) {
      LambdaLogger logger = ctx.getLogger();
      mapper.save(a);
      Account r = a;
      logger.log("Account: " + r.getId());
      return r;
   }

}

The GetCustomer function not only interacts with DynamoDB, but also invokes the GetAccountsByCustomerId function. This may not be the best example of the need to call another function, because it could retrieve the data from the account table directly. But I wanted to separate the data layer from the function logic and just show how invoking another function works in AWS Lambda.


public class GetCustomer implements RequestHandler<Customer, Customer> {

    private DynamoDBMapper mapper;
    private AccountService accountService;

    public GetCustomer() {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        mapper = new DynamoDBMapper(client);

        accountService = LambdaInvokerFactory.builder()
                .lambdaClient(AWSLambdaClientBuilder.defaultClient())
                .build(AccountService.class);
    }

    @Override
    public Customer handleRequest(Customer customer, Context ctx) {
        LambdaLogger logger = ctx.getLogger();
        logger.log("Customer: " + customer.getId());
        customer = mapper.load(Customer.class, customer.getId());
        List<Account> accounts = accountService.getAccountsByCustomerId(new Account(customer.getId()));
        customer.setAccounts(accounts);
        return customer;
    }

}

AccountService is an interface. It uses the @LambdaFunction annotation to declare the name of the function invoked in the cloud.


public interface AccountService {

    @LambdaFunction(functionName = "GetAccountsByCustomerIdFunction")
    List<Account> getAccountsByCustomerId(Account account);

}

API Configuration

I assume that you have already uploaded your Lambda functions. Now, you can go to the AWS Web Console and see the full list of them in the AWS Lambda section. Every function can be tested by selecting it in the function list and calling the Test function action.

[image: lambda-3]
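For example, PostAccountFunction can be tested with a simple event like the one below (the field values are arbitrary). Since the id is generated by @DynamoDBAutoGeneratedKey, it is omitted from the request and appears only in the response:

```json
{
  "number": "1234567890",
  "customerId": "1"
}
```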

If you didn't configure role permissions, you probably got an error while trying to call your Lambda function. I attached the AmazonDynamoDBFullAccess policy to the main lambda_basic_execution role for the Amazon DynamoDB connection. Then I created a new inline policy to enable invoking GetAccountsByCustomerIdFunction from another Lambda function, as you can see in the figure below. If you retry your tests now, everything works fine.

[image: aws-serverless-function]
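The inline policy could look like the snippet below. This is a sketch: the account id 123456789012 is a placeholder, and the region must match your deployment.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:GetAccountsByCustomerIdFunction"
    }
  ]
}
```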

Well, now we are able to test our functions from the AWS Lambda Web Test Console. But our main goal is to invoke them from outside clients, for example a REST client. Fortunately, there is a component called API Gateway which can be configured to proxy HTTP requests from the gateway to the Lambda functions. Here's a figure with our API configuration: for example, POST /customer is mapped to PostCustomerFunction, GET /customer/{id} is mapped to GetCustomerFunction, etc.

[image: lambda-5]

You can configure model definitions and set them as input or output types for the API.

{
  "title": "Account",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "number": {
      "type": "string"
    },
    "customerId": {
      "type": "string"
    }
  }
}

For GET requests, the configuration is a little more complicated. We have to map the path parameter into the JSON object which is the input of the Lambda function. Select the Integration Request element and then go to the Body Mapping Templates section.

[image: lambda-6]
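A minimal mapping template for GET /customer/{id}, defined for the application/json content type, could look like this. It builds the JSON object that GetCustomerFunction expects from the id path parameter:

```
{
  "id": "$input.params('id')"
}
```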

Our API can also be exported as a Swagger JSON definition. If you are not familiar with Swagger, take a look at my previous article Microservices API Documentation with Swagger2.

[image: aws-lambda-7]

Final words on AWS serverless

In my article, I described the steps for creating an API based on the AWS Lambda serverless solution. I showed the obvious advantages of this solution, such as no need to manage servers yourself, the ability to easily deploy applications in the cloud, and configuration and monitoring fully based on the tools provided by the AWS Web Console. You can easily extend my sample with some other services, for example with Kinesis to enable data stream processing. In my opinion, AWS serverless is a perfect solution for exposing simple APIs in the cloud.

The post Serverless on AWS Lambda appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/06/23/serverless-on-aws-lambda/feed/ 0 3953