Moving an SPA to S3


This time I will walk you through how I moved the pajthy frontend - a single-page site written in React, packaged on top of an NGINX server in a Docker container - into AWS.

SPA stands for Single Page Application; it's a setup where all the assets in the application's domain are loaded with the first request. It serves the same purpose as loading screens in games: it cuts down on content loading while you're in the app.

The pajthy frontend is an SPA, just a collection of static assets. And since the CORS headers are set up properly in an "allow all for everybody" way, it does not matter where the site is hosted; if it can be downloaded, it can be used.

That said, the plan was to set up a new domain for pajthy (up until now it was using a subdomain of akarasz.me), bring in one of the signature AWS services, S3 (short for Simple Storage Service 💥), upload the artifacts and that's it - we have lift-off!

This post is the third in a series called Pajthy to AWS, where I try to capture the process of migrating one of my open-source pet projects to a serverless setup in AWS.

Reserving the domain

Back before I decided on this migration I was looking into the question "how much will this cost me?", and the verdict was that while it's only slightly cheaper than the current setup (remember, the whole of pajthy is served from a $5 virtual machine), if demand for the service ramps up it should come out much cheaper in comparison. Doing the cost calculation revealed an unexpected fee: AWS charges $0.50 per month per hosted zone (that's the thing collecting all the records for a single domain). I hadn't met another provider before that charged me explicitly for DNS hosting 🤷, so I found this weird; luckily half a buck a month is not that big of an expense in this part of the globe either, so it was not a show stopper.

Registering a new domain with AWS is just like your typical next-next-finish install, so I won't go into details here - the important part is that after about two minutes I was the happy owner of the pajthy.com domain 🎉.
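
The registration itself needs contact details, so the console is the comfortable way to do it, but if you like poking around with the CLI, checking whether a name is free looks roughly like this (the route53domains API only lives in us-east-1):

# check if the domain can be registered at all
aws route53domains check-domain-availability \
    --domain-name pajthy.com \
    --region us-east-1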

In retrospect I should've looked at other domain registrars before registering the domain - it really doesn't matter who your original registrar is, since all registrars let you change the NS records for your domain; by setting those you can point to the name servers that actually hold your records.

[Image: Donald Duck counting money - "Every penny counts!"]

For example, in my case I should've registered the domain at GoDaddy.com - they have a pretty good deal on domains for the first year (for a .com it's $2 instead of the standard $10-$12) - and simply pointed it at AWS from there. Next time I'll make sure I won't forget about this step - after all, that $10 difference would've paid for more than a year and a half of hosting the managed zone.
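
If you go down that road, the only AWS-side work is creating the hosted zone yourself and copying its name servers over to the registrar. A rough sketch with the CLI - the zone id below is a placeholder, and note that registering through Route 53 creates the zone for you automatically:

# create a hosted zone for a domain registered elsewhere; the caller
# reference is just a uniqueness token for retries
aws route53 create-hosted-zone \
    --name pajthy.com \
    --caller-reference "pajthy-$(date +%s)"

# list the name servers to enter as NS records at the registrar
aws route53 get-hosted-zone \
    --id ZXXXXXXXXXXXXX \
    --query 'DelegationSet.NameServers'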

Serving the static assets

Setting up S3 to serve your static content is easy - that is the primary purpose of the service, after all. There is an extra step if you want to serve your content via HTTPS, but that part is simple as well.

There are countless tutorials and how-tos on the net; you can use the official one. Or, if you want to do what I did:

npm run build
aws s3 cp build/ s3://pajthy.com/ --recursive

The one thing to pay attention to is the name of the bucket: use the custom domain exactly (in my case: pajthy.com), because otherwise Route 53 won't offer the bucket as an alias target when you set up the domain.
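
For reference, the bucket setup itself is a one-time thing; here is a minimal sketch of what I could have scripted instead of clicking through the console (the website-hosting part is what the official tutorial does - whether you also need a public-read bucket policy or an origin access identity depends on how CloudFront is wired up to the bucket):

# the bucket name has to match the domain exactly
aws s3 mb s3://pajthy.com

# enable static website hosting with index.html as the entry point
aws s3 website s3://pajthy.com/ --index-document index.html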

graph LR
    r53[Route 53] --- CloudFront
    CloudFront --- acm[Certificate Manager]
    CloudFront --- S3

All the settings for HTTPS routing can be done from CloudFront - just create a new distribution, set it to serve content from your S3 bucket and click the Request a Certificate with ACM button - AWS takes care of the rest.
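
I did all of this in the console, but for the record, the two glue pieces around the distribution look roughly like this with the CLI - treat the zone id and the distribution domain below as placeholders:

# certificates used by CloudFront must be requested in us-east-1
aws acm request-certificate \
    --domain-name pajthy.com \
    --validation-method DNS \
    --region us-east-1

# point the apex domain at the distribution with an alias record;
# Z2FDTNDATAQYW2 is CloudFront's fixed hosted zone id for alias targets
cat > alias.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "pajthy.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dXXXXXXXXXXXXX.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
aws route53 change-resource-record-sets \
    --hosted-zone-id ZXXXXXXXXXXXXX \
    --change-batch file://alias.json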

With that, the content is being served; it uses the same backend as before, and functionality is the same. However, when I tried to open a session via a link I got an error…

Routing in the SPA

The pajthy frontend uses BrowserRouter from react-router, which handles URL paths that would normally be resolved by the web server - and our AWS setup is not yet aware that these should be handled by the frontend instead.

With NGINX I had the following config:

    location / {
        root      /usr/share/nginx/html;
        index     index.html;
        try_files $uri /index.html;
    }

The try_files directive made sure that every request hitting this location fell back to /index.html, where the frontend was ready to process it.

I found that there are two general ways I could go from here:

  • switch to another router in the frontend, like HashRouter. With that, all URLs would always point to the root resource (/), so CloudFront would never get a request it is not able to serve - no need to set up anything
  • look for a solution similar to the NGINX one in CloudFront or S3 - react-router is a widely used package and AWS is a widespread cloud provider, so I'm sure somebody else has already faced this issue.

I chose the second option - since a user's first experience with the service is usually a shared link to a session, I believe it's important for that URL to be easy to understand, i.e. no funky characters (# is a funky character for a common user, in my opinion). People trust things more when they understand them better, after all.

Solution

And a solution there is; it's pretty close to the one I had in NGINX before - in CloudFront we can set custom error pages. Since I already knew that navigating to a resource that's not found in the bucket returns a 403, all I had to do was set up a Custom Error Response that serves /index.html whenever a 403 Forbidden comes back.

[Screenshot: custom error response setup in the CloudFront console]
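
If you'd rather script it, the same setting lives in the CustomErrorResponses part of the distribution config; a rough sketch, with the distribution id as a placeholder:

# fetch the current config (the response also contains the ETag you'll
# need for the update)
aws cloudfront get-distribution-config --id EXXXXXXXXXX > dist.json

# edit dist.json: keep only the DistributionConfig object and set
#   "CustomErrorResponses": {
#     "Quantity": 1,
#     "Items": [{
#       "ErrorCode": 403,
#       "ResponsePagePath": "/index.html",
#       "ResponseCode": "200",
#       "ErrorCachingMinTTL": 10
#     }]
#   }

# push it back, passing the ETag from the first call as --if-match
aws cloudfront update-distribution \
    --id EXXXXXXXXXX \
    --distribution-config file://dist.json \
    --if-match ETAG_FROM_GET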

With this solution I stayed true to my promise that I'd treat my core code as a black box: I did not change any of the existing code. On the other hand, every non-root resource request will hit CloudFront, increasing the request counter - and that's one of the things I have to pay for. We'll see how it goes in the long run; at some point in the future I might reconsider and switch to the HashRouter.

Now that the frontend migration is done, I'll show you how the migration of the backend went.