Serverless Java Functions on OpenShift
Piotr's TechBlog, Tue, 30 Nov 2021
https://piotrminkowski.com/2021/11/30/serverless-java-functions-on-openshift/
In this article, you will learn how to create and deploy serverless, Knative-based functions on OpenShift. We will use a single kn CLI command to build and run our applications on the cluster. How can we do that? With OpenShift Serverless Functions we may use the kn func plugin, which allows us to work directly with the source code. It uses the Cloud Native Buildpacks API to create container images. It supports several runtimes like Node.js, Python, or Go. However, we will try the Java runtimes based on the Quarkus and Spring Boot frameworks.

Prerequisites

You need two things to be able to run this exercise by yourself. Firstly, you need to run Docker or Podman on your local machine, because Cloud Native Buildpacks use it to run builds. If you are not familiar with Cloud Native Buildpacks, you can read my article about it. I tried to configure Podman according to this part of the documentation, but I did not succeed (on macOS). With Docker, it just works, so I didn't make further attempts with Podman.

Secondly, you need a target OpenShift cluster with the serverless module installed. You can run it locally using CodeReady Containers (crc). But in my opinion, a better idea is to try the developer sandbox available online here. It contains all you need to start development, including OpenShift Serverless, which is available by default on the sandbox version of OpenShift.

Finally, you need to install the oc client and the kn CLI locally. Since we use the kn func plugin, we need to install the Knative CLI version provided by Red Hat. The detailed installation instructions are available here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then go to the serverless/functions directory. After that, you should just follow my instructions. Let’s begin.

Create OpenShift Serverless function with Quarkus

We can generate sample application source code using a single kn command. We may choose between multiple runtimes and two templates. By default, the runtime is node (for Node.js applications), but you can also set quarkus, springboot, typescript, go, or python. There are two templates available: http for simple REST-based applications and events for applications leveraging the Knative Eventing approach to communication. Let's create our first application using the quarkus runtime and the events template.

$ kn func create -l quarkus -t events caller-function

Now, go to the caller-function directory and edit the generated pom.xml file. Firstly, we will set the Java version to 11 and the Quarkus version to the latest release.

<properties>
  <compiler-plugin.version>3.8.1</compiler-plugin.version>
  <maven.compiler.parameters>true</maven.compiler.parameters>
  <maven.compiler.source>11</maven.compiler.source>
  <maven.compiler.target>11</maven.compiler.target>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  <quarkus-plugin.version>2.4.2.Final</quarkus-plugin.version>
  <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id>
  <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
  <quarkus.platform.version>2.4.2.Final</quarkus.platform.version>
  <surefire-plugin.version>3.0.0-M5</surefire-plugin.version>
</properties>

By default, the kn func plugin includes some dependencies in the test scope and a single dependency with the Quarkus Funqy Knative extension. There is also Quarkus SmallRye Health to automatically generate liveness and readiness health checks.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-knative-events</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

By default, the kn func plugin generates a simple function that takes a CloudEvent as input and sends the same event as output. I will not change much there, just replace System.out with a Logger implementation in order to print logs.

public class Function {

   @Inject
   Logger logger;

   @Funq
   public CloudEvent<Output> function(CloudEvent<Input> input) {
      logger.infof("New event: %s", input);
      Output output = new Output(input.data().getMessage());
      return CloudEventBuilder.create().build(output);
   }

}
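For illustration, a CloudEvent delivered to this function over HTTP in structured mode could look like the hypothetical JSON below. The id, source, and type values are just examples — only the data.message field must match the Input class:

```json
{
  "specversion": "1.0",
  "id": "d2f8a3b0-0001",
  "source": "kn-func-emit",
  "type": "dev.knative.example",
  "datacontenttype": "application/json",
  "data": {
    "message": "Hello"
  }
}
```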

Assuming you have already logged in to your OpenShift cluster using the oc client, you can proceed to function deployment. In fact, you just need to go to your application directory and run a single kn func command, as shown below.

$ kn func deploy -i quay.io/pminkows/caller-function -v

Once you run the command shown above, the local build starts on Docker. If it finishes successfully, we proceed to the deployment phase.

openshift-serverless-functions-build-and-deploy

In the application root directory, there is also an automatically generated configuration file func.yaml.

name: caller-function
namespace: ""
runtime: quarkus
image: quay.io/pminkows/caller-function
imageDigest: sha256:5d3ef16e1282bc5f6367dff96ab7bb15487199ac3939e262f116657a83706245
builder: quay.io/boson/faas-jvm-builder:v0.8.4
builders: {}
buildpacks: []
healthEndpoints: {}
volumes: []
envs: []
annotations: {}
options: {}
labels: []

Create OpenShift Serverless functions with Spring Boot

Now, we will do exactly the same thing as before, but this time for a Spring Boot application. In order to create a Spring Boot function, we just need to set springboot as the runtime name. The name of our application is callme-function.

$ kn func create -l springboot -t events callme-function

Then we go to the callme-function directory. Firstly, let's edit the Maven pom.xml. As with Quarkus, I'll set the latest version of Spring Boot.

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.6.1</version>
  <relativePath />
</parent>

The generated application is built on top of the Spring Cloud Function project. We don't need to add anything there.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-function-web</artifactId>
</dependency>

The Spring Boot function code is a little more complicated than the code generated by the Quarkus framework. It uses the Spring functional programming style, where a Function bean represents an HTTP POST endpoint with an input and an output.

@SpringBootApplication
public class SpringCloudEventsApplication {

  private static final Logger LOGGER = Logger.getLogger(
      SpringCloudEventsApplication.class.getName());

  public static void main(String[] args) {
    SpringApplication.run(SpringCloudEventsApplication.class, args);
  }

  @Bean
  public Function<Message<Input>, Output> uppercase(CloudEventHeaderEnricher enricher) {
    return m -> {
      HttpHeaders httpHeaders = HeaderUtils.fromMessage(m.getHeaders());
      
      LOGGER.log(Level.INFO, "Input CE Id:{0}", httpHeaders.getFirst(
          ID));
      LOGGER.log(Level.INFO, "Input CE Spec Version:{0}",
          httpHeaders.getFirst(SPECVERSION));
      LOGGER.log(Level.INFO, "Input CE Source:{0}",
          httpHeaders.getFirst(SOURCE));
      LOGGER.log(Level.INFO, "Input CE Subject:{0}",
          httpHeaders.getFirst(SUBJECT));

      Input input = m.getPayload();
      LOGGER.log(Level.INFO, "Input {0} ", input);
      Output output = new Output();
      output.input = input.input;
      output.operation = httpHeaders.getFirst(SUBJECT);
      output.output = input.input != null ? input.input.toUpperCase() : "NO DATA";
      return output;
    };
  }

  @Bean
  public CloudEventHeaderEnricher attributesProvider() {
    return attributes -> attributes
        .setSpecVersion("1.0")
        .setId(UUID.randomUUID()
            .toString())
        .setSource("http://example.com/uppercase")
        .setType("com.redhat.faas.springboot.events");
  }

  @Bean
  public Function<String, String> health() {
    return probe -> {
      if ("readiness".equals(probe)) {
        return "ready";
      } else if ("liveness".equals(probe)) {
        return "live";
      } else {
        return "OK";
      }
    };
  }
}

Because there are two functions (@Bean Function) defined in the generated code, you need to add the following property in the application.properties file.

spring.cloud.function.definition = uppercase;health
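The semicolon-separated definition just lists the bean names that Spring Cloud Function should expose, each under its own path. As a rough plain-Java sketch of that idea (this is only an illustration, not the framework's actual code):

```java
import java.util.Map;
import java.util.function.Function;

public class FunctionRouter {

    // Hypothetical registry mirroring spring.cloud.function.definition = uppercase;health
    static final Map<String, Function<String, String>> REGISTRY = Map.of(
            "uppercase", s -> s.toUpperCase(),
            "health", probe -> "readiness".equals(probe) ? "ready"
                    : "liveness".equals(probe) ? "live" : "OK");

    // The framework dispatches an HTTP POST /<name> to the bean registered under <name>
    static String invoke(String name, String payload) {
        return REGISTRY.get(name).apply(payload);
    }

    public static void main(String[] args) {
        System.out.println(invoke("uppercase", "hello"));
        System.out.println(invoke("health", "readiness"));
    }
}
```
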

Deploying serverless functions on OpenShift

We have two sample applications deployed on the OpenShift cluster: the first is written with Quarkus, the second with Spring Boot. Those applications will communicate with each other through events. So, in the first step, we need to create a Knative Eventing broker.

$ kn broker create default

Let’s check if the broker has been successfully created.

$ kn broker list
NAME      URL                                                                                  AGE   CONDITIONS   READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/piomin-serverless/default   12m   5 OK / 5     True

Then, let’s display a list of running Knative services:

$ kn service list
NAME              URL                                                                                       LATEST                  AGE   CONDITIONS   READY   REASON
caller-function   https://caller-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com   caller-function-00002   18m   3 OK / 3     True    
callme-function   https://callme-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com   callme-function-00006   11h   3 OK / 3     True 

Send and receive CloudEvent with Quarkus

The architecture of our solution is visible in the picture below. The caller-function receives events sent directly by us using the kn func plugin. Then it processes the input event, creates a new CloudEvent, and sends it to the Knative Broker. The broker just receives events. To provide more advanced event routing, we need to define a Knative Trigger. The trigger is able to filter events and send them directly to the target sink. Finally, those events are received by the callme-function.

openshift-serverless-functions-arch

Ok, so now we need to rewrite the caller-function to add a step that creates and sends a CloudEvent to the Knative Broker. To do that, we need to declare and invoke a Quarkus REST client. Firstly, let's add the required dependencies.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>

In the next step, we will create a client interface with a declaration of the calling method. The CloudEvent specification requires four HTTP headers to be set on the POST request. The context path of the Knative Broker is /piomin-serverless/default, as we have already verified using the kn broker list command.

@Path("/piomin-serverless/default")
@RegisterRestClient
public interface BrokerClient {

   @POST
   @Produces(MediaType.APPLICATION_JSON)
   String sendEvent(Output event,
                    @HeaderParam("Ce-Id") String id,
                    @HeaderParam("Ce-Source") String source,
                    @HeaderParam("Ce-Type") String type,
                    @HeaderParam("Ce-Specversion") String version);
}

We also need to set the broker address in application.properties. We use a standard property, mp-rest/url, handled by the MicroProfile REST client.

functions.BrokerClient/mp-rest/url = http://broker-ingress.knative-eventing.svc.cluster.local 

Here’s the final implementation of our function in the caller-function module. The type of event is caller.output. In fact, we can set any name as an event type.

public class Function {

   @Inject
   Logger logger;
   @Inject
   @RestClient
   BrokerClient client;

   @Funq
   public CloudEvent<Output> function(CloudEvent<Input> input) {
      logger.infof("New event: %s", input);
      Output output = new Output(input.data().getMessage());
      CloudEvent<Output> outputCloudEvent = CloudEventBuilder.create().build(output);
      client.sendEvent(output,
                input.id(),
                "http://caller-function",
                "caller.output",
                input.specVersion());
      return outputCloudEvent;
    }

}

Finally, we will create a Knative Trigger. It receives events incoming to the broker and filters them by type. Only events with the type caller.output are forwarded to the callme-function.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: callme-trigger
spec:
  broker: default
  filter:
    attributes:
      type: caller.output
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: callme-function
    uri: /uppercase

Now, we can send a test CloudEvent to the caller-function directly from the local machine with the following command (ensure you are calling it in the caller-function directory):

$ kn func emit -d "Hello"

The Output class in the caller-function and the Input class in the callme-function should have the same fields. By default, kn func generates different fields for the Quarkus and Spring Boot example applications, so you also need to refactor one of these objects. I changed the field in the callme-function Input class.
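For example, the aligned classes could look like the sketch below. The field name message is my assumption — the important part is that the producing Output class and the consuming Input class use the same field on both sides of the event exchange:

```java
public class EventPayloads {

    // Consumed by the receiving function; field name must match the event payload
    public static class Input {
        private String message;
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }

    // Produced by the sending function; serialized into the CloudEvent data
    public static class Output {
        private String message;
        public Output() { }
        public Output(String message) { this.message = message; }
        public String getMessage() { return message; }
    }

    public static void main(String[] args) {
        Input in = new Input();
        in.setMessage("Hello");
        // The caller-function simply copies the input message into the output event
        Output out = new Output(in.getMessage());
        System.out.println(out.getMessage());
    }
}
```
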

Summary

Let's summarize our exercise. We built and deployed two applications (Quarkus and Spring Boot) directly from the source code on the target cluster. Thanks to OpenShift Serverless Functions, we didn't have to provide any additional configuration to deploy them as Knative services.

If there is no incoming traffic, Knative services are automatically scaled down to zero after 60 seconds (by default).
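This window can be tuned per service. As a hedged sketch, assuming you edit the Knative Service resource and using the Knative autoscaling window annotation (the value below is just an example):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: caller-function
spec:
  template:
    metadata:
      annotations:
        # Stable window over which metrics are averaged before scaling to zero (default 60s)
        autoscaling.knative.dev/window: "120s"
```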

So, to generate some traffic we may use the kn func emit command. It sends a CloudEvent message directly to the target application. In our case, it is the caller-function (Quarkus). After receiving an input event, the pod with the caller-function starts. After startup, it sends a CloudEvent message to the Knative Broker. Finally, the event goes to the callme-function (Spring Boot), which also starts, as shown below.

openshift-serverless-functions-pods

As you can see, OpenShift provides several simplifications when working with Knative. What's important, you can now easily test them by yourself using the developer sandbox version of OpenShift available online.

Serverless on AWS Lambda
Piotr's TechBlog, Fri, 23 Jun 2017
https://piotrminkowski.com/2017/06/23/serverless-on-aws-lambda/
Preface

Serverless is now one of the hottest trends in the IT world. A more accurate name for it is Function as a Service (FaaS). Have any of you ever tried to share your APIs deployed in the cloud? Before serverless, I had to create a virtual machine with Linux on the cloud provider's infrastructure, and then deploy and run the application implemented in, for example, Node.js or Java. With serverless, you do not have to write any commands in Linux.

How is serverless different from another very popular topic – microservices? To illustrate the difference, serverless is often referred to as nano services. For example, if we would like to create a microservice that provides an API for CRUD operations on a database table, our API would have several endpoints: searching (GET /{id}), updating (PUT), removing (DELETE), inserting (POST), and maybe a few more for searching by different input criteria. In a serverless architecture, each of those endpoints would be an independent function, created and deployed separately. While a microservice can be built on on-premise infrastructure, for example with Spring Boot, serverless is closely tied to cloud infrastructure.

Implementing a custom function based on the cloud provider's tools is really quick and easy. I'll try to show it with sample functions deployed on AWS using AWS Lambda. The sample application source code for AWS serverless is available on GitHub.

How AWS serverless works

Here’s the serverless AWS Lambda solution description from the Amazon site.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

aws-serverless-lambda

AWS Lambda is a computing platform for many application scenarios. It supports applications written in Node.js, Java, C#, and Python. The platform also provides services like DynamoDB (a NoSQL database), Kinesis (a streaming service), CloudWatch (monitoring and logs), Redshift (a data warehouse solution), S3 (cloud storage), and API Gateway. Every event coming to those services can trigger a call of your Lambda function. You can also interact with those services using the AWS Lambda SDK.

AWS serverless preparation

Let's finish with the theory – we all like concrete examples best 🙂 First of all, we need to set up an AWS account. AWS has a web management console available here, but there is also a command-line client called the AWS CLI, which can be downloaded here. There are also some other tools through which we can publish our functions on AWS; I will tell you about them later. To be able to use them, including the command-line client, we need to generate an access key. Go to the web console and select My Security Credentials in your profile, then select Continue to Security Credentials and expand Access Keys. Create your new access key and save it on disk. There are two fields: Access Key ID and Secret Access Key. If you would like to use the AWS CLI, first type aws configure and then provide those keys, the default region, and the output format (for example JSON or text).

You can use the AWS CLI or even the web console to deploy your Lambda function to the cloud. However, I will present other (in my opinion better :)) solutions. If you are using Eclipse for development, the best option is to download the AWS Toolkit plugin. With it, I'm able to upload my function to AWS Lambda or even create or modify a table in Amazon DynamoDB. After installing the Eclipse plugin, you need to provide the Access Key ID and Secret Access Key. You then have the AWS Management perspective available, where you can see all your AWS resources, including Lambda functions, DynamoDB tables, identity management, and other services like S3, SNS, or SQS. You can create a special AWS Java Project or work with a standard Maven project. Just display the project menu by right-clicking the project, then select Amazon Web Services and Upload function to AWS Lambda…

aws-serverless-deploy-1

After selecting Upload function to AWS Lambda… you should see the window visible below. You can choose the region for your deployment (us-east-1 by default), the IAM role, and, most importantly, the name of your Lambda function. We can create a new function or update an existing one.

Another interesting possibility for uploading functions to AWS Lambda is a Maven plugin. With lambda-maven-plugin we can define security credentials and all definitions of our functions in JSON format. Here's the plugin declaration in pom.xml. The plugin can be invoked during the Maven project build with mvn clean install lambda:deploy-lambda. Dependencies should be attached to the output JAR file – that's why maven-shade-plugin is used during the build.

<plugin>
  <groupId>com.github.seanroy</groupId>
  <artifactId>lambda-maven-plugin</artifactId>
  <version>2.2.1</version>
  <configuration>
    <accessKey>${aws.accessKey}</accessKey>
    <secretKey>${aws.secretKey}</secretKey>
    <functionCode>${project.build.directory}/${project.build.finalName}.jar</functionCode>
    <version>${project.version}</version>
    <lambdaRoleArn>arn:aws:iam::436521214155:role/lambda_basic_execution</lambdaRoleArn>
    <s3Bucket>lambda-function-bucket-us-east-1-1498055423860</s3Bucket>
    <publish>true</publish>
    <forceUpdate>true</forceUpdate>
    <lambdaFunctionsJSON>
      [
        {
          "functionName": "PostAccountFunction",
          "description": "POST account",
          "handler": "pl.piomin.services.aws.account.add.PostAccount",
          "timeout": 30,
          "memorySize": 256,
          "keepAlive": 10
        },
        {
          "functionName": "GetAccountFunction",
          "description": "GET account",
          "handler": "pl.piomin.services.aws.account.find.GetAccount",
          "timeout": 30,
          "memorySize": 256,
          "keepAlive": 30
        },
        {
          "functionName": "GetAccountsByCustomerIdFunction",
          "description": "GET accountsCustomerId",
          "handler": "pl.piomin.services.aws.account.find.GetAccountsByCustomerId",
          "timeout": 30,
          "memorySize": 256,
          "keepAlive": 30
        }
      ]
    </lambdaFunctionsJSON>
  </configuration>
</plugin>

AWS Lambda functions implementation

I implemented the sample AWS Lambda functions in Java. Here's the list of dependencies inside pom.xml.

<dependencies>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>1.3.0</version>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-log4j</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.11.152</version>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-lambda</artifactId>
    <version>1.11.152</version>
  </dependency>
</dependencies>

Every function is connected to Amazon DynamoDB. There are two tables created for this sample: account and customer. One customer can have more than one account, and this assignment is realized through the customerId field in the account table. The AWS library for DynamoDB has ORM mapping mechanisms. Here's the Account entity definition. Using annotations, we can declare the table name, hash key, index, and table attributes.

@DynamoDBTable(tableName = "account")
public class Account implements Serializable {

    private static final long serialVersionUID = 8331074361667921244L;
    private String id;
    private String number;
    private String customerId;

    public Account() {
    }

    public Account(String id, String number, String customerId) {
        this.id = id;
        this.number = number;
        this.customerId = customerId;
    }

    @DynamoDBHashKey(attributeName = "id")
    @DynamoDBAutoGeneratedKey
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    @DynamoDBAttribute(attributeName = "number")
    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }

    @DynamoDBIndexHashKey(attributeName = "customerId", globalSecondaryIndexName = "Customer-Index")
    public String getCustomerId() {
        return customerId;
    }

    public void setCustomerId(String customerId) {
        this.customerId = customerId;
    }

}

In the described sample application there are five Lambda functions:
- PostAccountFunction – receives an Account object from the request and inserts it into the table
- GetAccountFunction – finds an account by the hash key id attribute
- GetAccountsByCustomerIdFunction – finds a list of accounts by the input customerId
- PostCustomerFunction – receives a Customer object from the request and inserts it into the table
- GetCustomerFunction – finds a customer by the hash key id attribute

Every AWS Lambda function handler needs to implement the RequestHandler interface with its single method, handleRequest. Here's the PostAccount handler class. It connects to DynamoDB using the Amazon client and creates an ORM mapper, DynamoDBMapper, which saves the input entity in the database.

public class PostAccount implements RequestHandler<Account, Account> {

    private DynamoDBMapper mapper;

    public PostAccount() {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        mapper = new DynamoDBMapper(client);
    }

    @Override
    public Account handleRequest(Account a, Context ctx) {
        LambdaLogger logger = ctx.getLogger();
        mapper.save(a);
        Account r = a;
        logger.log("Account: " + r.getId());
        return r;
    }

}

The GetCustomer function not only interacts with DynamoDB but also invokes the GetAccountsByCustomerIdFunction. Maybe this is not the best example of the need to call another function, because it could retrieve the data from the account table directly. But I wanted to separate the data layer from the function logic and just show how invoking another function works in the AWS Lambda cloud.


public class GetCustomer implements RequestHandler<Customer, Customer> {

    private DynamoDBMapper mapper;
    private AccountService accountService;

    public GetCustomer() {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        mapper = new DynamoDBMapper(client);

        accountService = LambdaInvokerFactory.builder()
                .lambdaClient(AWSLambdaClientBuilder.defaultClient())
                .build(AccountService.class);
    }

    @Override
    public Customer handleRequest(Customer customer, Context ctx) {
        LambdaLogger logger = ctx.getLogger();
        logger.log("Account: " + customer.getId());
        customer = mapper.load(Customer.class, customer.getId());
        List<Account> aa = accountService.getAccountsByCustomerId(new Account(customer.getId()));
        customer.setAccounts(aa);
        return customer;
    }
}

AccountService is an interface. It uses the @LambdaFunction annotation to declare the name of the invoked function in the cloud.


public interface AccountService {

    @LambdaFunction(functionName = "GetAccountsByCustomerIdFunction")
    List<Account> getAccountsByCustomerId(Account account);

}

API Configuration

I assume that you have already uploaded your Lambda functions. Now, you can go to the AWS web console and see the full list of them in the AWS Lambda section. Every function can be tested by selecting it in the functions list and calling the Test function action.

lambda-3

If you didn't configure role permissions, you probably got an error while trying to call your Lambda function. I attached the AmazonDynamoDBFullAccess policy to the main lambda_basic_execution role for the Amazon DynamoDB connection. Then I created a new inline policy to enable invoking GetAccountsByCustomerIdFunction from another Lambda function, as you can see in the figure below. If you retry your tests now, everything should work fine.

aws-serverless-function
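For reference, such an inline policy can be sketched as the JSON below. The account id is a placeholder, and the ARN is my assumption of what the resource would look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:GetAccountsByCustomerIdFunction"
    }
  ]
}
```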

Well, now we are able to test our functions from the AWS Lambda web test console. But our main goal is to invoke them from outside clients, for example a REST client. Fortunately, there is a component called API Gateway which can be configured to proxy HTTP requests from the gateway to Lambda functions. Here's a figure with our API configuration: for example, POST /customer is mapped to PostCustomerFunction, GET /customer/{id} is mapped to GetCustomerFunction, etc.

lambda-5

You can configure model definitions and set them as input or output types for API.

{
  "title": "Account",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "number": {
      "type": "string"
    },
    "customerId": {
      "type": "string"
    }
  }
}

For GET requests, the configuration is a little more complicated. We have to set a mapping from the path parameter to the JSON object that is the input to the Lambda function. Select the Integration Request element and then go to the Body Mapping Templates section.

lambda-6
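Such a body mapping template builds the function's JSON input from the id path parameter. Using the API Gateway mapping template syntax ($input.params), a minimal sketch for GET /customer/{id} could look like this:

```json
{
  "id": "$input.params('id')"
}
```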

Our API can also be exported as a Swagger JSON definition. If you are not familiar with Swagger, take a look at my previous article, Microservices API Documentation with Swagger2.

aws-lambda-7

Final words on AWS serverless

In this article, I described, step by step, how to create an API based on the AWS Lambda serverless solution. I showed the obvious advantages of this solution, such as no need to self-manage servers, the ability to easily deploy applications in the cloud, and configuration and monitoring fully based on the solutions provided by the AWS web console. You can easily extend my sample with other services, for example with Kinesis to enable data stream processing. In my opinion, AWS serverless is a perfect solution for exposing simple APIs in the cloud.
