Wrap a REST Service in Lambda

Pajthy to AWS

At first glance it might sound counterintuitive to wrap a REST server - a typically long-running process - into a serverless function. I’m sure this trick won’t work with every service; luckily, in our case there are some aspects of the pajthy backend that make it possible:

  • the web server is included in the executable (for a counterexample, think of a standalone Tomcat instance with a .war package)
  • minimal ramp-up time before it is able to serve an incoming request (again, just compare the startup time of a minimal Java webapp with a Go app of the same functionality).

Also, there is an extra surprise in Go’s standard library that comes in handy - more on that later, I don’t want to spoil the fun! 😉

This item is the fourth one of a series called Pajthy to AWS where I try to capture the process of migrating one of my open-source pet projects into a serverless setup in AWS.

Where are we going

In this part we’ll focus on two things:

  1. set up an HTTP API in the API Gateway to receive incoming requests
  2. wrap the existing backend service into a Lambda function
graph LR
  r53[Route 53] --> apigw[API Gateway]
  subgraph "Focus of this part"
    apigw
    Lambda
  end
  apigw --> Lambda
  Lambda --> stuff[other dependencies]

I won’t go into details about how to create a custom domain for the API endpoint in Route 53 - it’s a pretty straightforward configuration, I don’t want to bore anybody with that. Will not discuss the upstream services the app depends on either: those will have their own posts soon.

Right now all I want to achieve is a 201 Created response when I try to create a new session; the other operations require persistent storage since, well, the server itself is not permanent anymore (pajthy is far from your typical 12-factor app setup).

Amazon API Gateway

With a serverless setup we need a service that acts as a listener for our incoming HTTP requests; in AWS, that’s the API Gateway. With this service we can design the API: we can define routes (e.g. POST /{id}/hello) and link them to integrations (like the Lambda function we’ll have in just a moment). Needless to say, API Gateway can do much more; there are two nifty features we can exploit right now.

First, since the routing is already configured in the service’s own router, I don’t want to specify the routes again; having multiple sources of truth never helped anybody. Instead I can define a single route that matches everything: ANY /{proxy+}. Amazon calls this a greedy path variable; you can read more about it in the docs.
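For reference, such a route can be created from the AWS CLI as well - just a sketch, assuming an HTTP API (v2) and with made-up api and integration IDs:

```shell
# Match every method and every path with a single greedy route
# (the api-id and the integration id below are placeholders).
aws apigatewayv2 create-route \
    --api-id a1b2c3d4 \
    --route-key 'ANY /{proxy+}' \
    --target integrations/abcd123
```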

Another convenient feature is the built-in CORS configuration. We don’t strictly need it, since the router in the wrapped function can deal with OPTIONS requests as well; however, by setting it up we can spare unnecessary Lambda executions, since API Gateway will answer OPTIONS requests itself without ever invoking the function 🧠💰.
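Something along these lines sets it up from the CLI - again only a sketch, with a placeholder api-id and origin:

```shell
# Let API Gateway answer CORS preflight requests itself,
# instead of invoking the Lambda for every OPTIONS call.
aws apigatewayv2 update-api \
    --api-id a1b2c3d4 \
    --cors-configuration '{"AllowOrigins":["https://example.com"],"AllowMethods":["*"],"AllowHeaders":["*"]}'
```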

AWS Lambda

Serverless functions are basically short-lived containers that are usually pre-built and missing only the function code itself. Amazon’s take on the concept is called Lambda.

Lambda functions can be written in several languages (including Go, yay!). A Lambda function’s input and output are standard JSON objects. That’s all we need to know for our purposes.

Connecting the dots

So what kind of messages will be sent between the API Gateway and the function? AWS has a quite extensive document describing the payload format into which it translates incoming HTTP requests, along with the format it expects for the responses. Here is the code I used to try this out:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type Request struct {
    Headers        map[string]string `json:"headers"`
    QueryParams    map[string]string `json:"queryStringParameters"`
    Body           string            `json:"body"`
    RequestContext struct {
        Http struct {
            Method string `json:"method"`
            Path   string `json:"path"`
        } `json:"http"`
    } `json:"requestContext"`
}

type Response struct {
    StatusCode int               `json:"statusCode"`
    Headers    map[string]string `json:"headers"`
    Body       string            `json:"body"`
}

func HandleLambda(ctx context.Context, req Request) (Response, error) {
    combined := map[string]string{}
    for k, v := range req.Headers {
        combined[k] = v
    }
    for k, v := range req.QueryParams {
        combined[k] = v
    }
    return Response{
        StatusCode: 200,
        Headers:    combined,
        Body:       fmt.Sprintf("%v %q", req.RequestContext.Http, req.Body),
    }, nil
}

func main() {
    lambda.Start(HandleLambda)
}

Easy, right? Let’s go through it section by section:

  • the Request type: a subset of the request fields, the ones we need for our backend:
    • the method and path of the request (remember, there is a greedy path set up in the API Gateway)
    • all the request headers
    • the query parameters from the URL, if any
    • and the body of the request
  • the Response type: also just the required subset of the fields:
    • the status code of the response
    • the headers we set
    • the response body
  • HandleLambda: the handler function itself

The response is put together from parts of the request. It sets the statusCode to 200; if you use a status code that by design is not supposed to have a payload, the body won’t get sent in the response (I learned this the hard way). The response headers consist of all the request headers and query parameters combined. Finally, the body shows the HTTP method and path along with the request body itself.

To deploy it I used the following commands (don’t forget to set the GOOS environment variable to linux beforehand, since Lambda runs the binary on Linux):

go build -o main main.go
zip func.zip main
aws lambda update-function-code --function-name MyFunc --zip-file fileb://func.zip

Wrapping an existing service

As I stated at the beginning of the series, I consider the existing core code a black box; I don’t want to change anything there just to make the migration easier if it would prevent me from running the core code outside of AWS. No vendor lock-in for me, thank you sir!

graph LR
  Lambda -->|1: extract| REST
  REST -->|2: request| Handler
  Handler -->|3: doStuff| Handler
  Handler -->|4: response| REST
  REST -->|5: wrap| Lambda

As it turns out, there is a perfect solution (the extra surprise I mentioned before) in the Go standard library that we can use to drive an existing handler without starting an actual server at all: the httptest package, which is already used extensively in the backend’s tests!

req := httptest.NewRequest(
    in.RequestContext.Http.Method,
    in.RequestContext.Http.Path,
    strings.NewReader(in.Body))
for k, v := range in.Headers {
    req.Header.Add(k, v)
}
q := req.URL.Query()
for k, v := range in.QueryParams {
    q.Add(k, v)
}
req.URL.RawQuery = q.Encode()

The same Request data structure is used as discussed above; here the request object gets constructed from the method, path, headers and query parameters - everything we have, and everything the handler needs to work.

rr := httptest.NewRecorder()

h := handler.New(store.NewInMemory(), event.New())
h.ServeHTTP(rr, req)

The magic itself happens here:

  • first the recorder gets created; this will capture the response
  • then the pajthy handler gets initialized; it uses in-memory dependencies for now
  • finally, calling ServeHTTP fires the request
headers := map[string]string{}
for k, vv := range rr.Result().Header {
    headers[k] = vv[0]
}

return Response{
    StatusCode: rr.Code,
    Headers:    headers,
    Body:       rr.Body.String(),
}, nil

Finally, after the handler (a.k.a. the black box) did its job, the HTTP response gets translated into a Lambda response; again, the Response struct is the same as before.

You can find the whole code here on github.

Trying it out

After setting all this up it worked like a charm; I have been able to create voting sessions for days now.

From time to time there are increased execution times (nothing above 70-80 ms though). The reason is that the container executing the Lambda function does not get shut down immediately; if another request comes in soon after, there is a good chance it will be served by the already-running container. So the “increased” execution times are actually the normal, cold-start ones; when an execution takes only ~20 ms on average, it’s because the service was already initialized.

In the next part I’ll move the storage to a more persistent solution; after that, it will actually be possible to use the sessions after they get created.