Building a Lightning-Fast Serverless Blog on AWS (Part 4)

Depascale Matteo
11 min read · Apr 18, 2023


Learn how to create a low-latency Single Page Application (SPA) using SvelteKit, S3, and CloudFront while eliminating slow CORS šŸ¢, and build one of the fastest blogs on the web.

āš ļø Hey! This blog post was initially published on my own blog. Check it out from the source: https://cloudnature.net/blog/building-a-lightningfast-serverless-blog-on-aws-part-4

Figure 1: My website's performance scores.

Introduction

As you may know, Iā€™m a fanatic about speed šŸŽļø, and recently, my friend Gianmarco Pettenuzzo and I had the opportunity to work on our own blog. Our goal? To build one of the fastest blogs out there! Why? Well, have you ever run a performance tool on one of those ā€œalready-madeā€ websites? Most of the time, itā€™s a bloodbath šŸ©ø.

Figure 2: Performance of four well-known blogs.

They donā€™t excel at everything, thatā€™s for sure. Thatā€™s why weā€™re going to build one of the most performant blogs on the web āœØ.

You know that quote from Norman Vincent Peale?
ā€œShoot for the moon. Even if you miss, youā€™ll land among the starsā€.
Thatā€™s exactly what weā€™re doing here! Shooting for the perfect score, letā€™s see where weā€™ll land.

P.S. You can see the websiteā€™s results in the cover image: the top one was run in desktop mode, while the second one is for mobile. I too hate getting clickbaited šŸ˜‰.

This post is part 4 of a really long journey. If you missed the first 3 parts, please go ahead and read them first. Iā€™ll wait šŸ‘‡.

Welcome back! Hereā€™s how this article is structured:

  1. A simple description of the website and its features;
  2. The overall AWS serverless infrastructure we used;
  3. The optimizations we made from a frontend perspective;
  4. The optimizations we made from a backend/infrastructure perspective.

Backoffice and website features

When building a blog, itā€™s essential to consider both frontend and backend features that will make your site user-friendly, functional, and efficient.

Figure 3: Website landing page.

The frontend

The frontend of a blog is what users see and interact with. Here are some key features to consider right from the beginning:

  • A visually appealing landing page that catches usersā€™ attention and gives them a glimpse of what the blog is about;
  • A blog post list page that displays all posts in one place. Each article should have tags so that users can easily find content on specific topics;
  • A page for each blog post, with a clear title, images, text, and social sharing buttons. You may also want to include a section of related articles to keep users engaged.

The backoffice

The backoffice is where the real creativity begins, and you have complete control over your content. To create a simple yet effective backoffice, you need:

  • A login page to ensure that only authorized users can post content (you surely donā€™t want anyone else posting for you šŸ˜œ);
  • A list of all your blog posts, just like on the frontend, but one that also includes drafts and lets you delete posts;
  • A create/update blog post page where you can work your magic and bring your ideas to life. It must be fast and engaging. Weā€™ve kept it simple, but we also have plenty of ideas on how to improve it.

AWS Serverless infrastructure

Below, you can find the final infrastructure for our lightning-fast serverless blog built with SvelteKit. As you can see, we have three CloudFront distributions: one that only admins can access, protected by Cognito; one for our public-facing blog; and a last one for static content. Our main distribution also sits in front of the API Gateway, which eliminates the need for CORS (weā€™ll talk more about this later in the article).

Figure 4: AWS serverless infrastructure.

To manage our static content, we cached everything that was possible to cache, including for the API Gateway. However, this presents a big problem: how can we flush the cache? To solve this issue, we created two Lambda functions:

  • One that gets triggered when a post is created, updated, or deleted and flushes the cache for the API layer;
  • Another that gets triggered when a new file is uploaded to the S3 bucket and invalidates the content cache.
Figure 5: Didnā€™t think of browser cache meme.
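
To give an idea of what the second Lambda looks like, here is a minimal sketch of an S3-triggered function that invalidates the content cache. It assumes a DISTRIBUTION_ID environment variable and an S3 event trigger already wired up in the infrastructure; the real code is in the GitHub repo linked at the end.

import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';
import type { S3Event } from 'aws-lambda';

const cloudfront = new CloudFrontClient({});

export const handler = async (event: S3Event): Promise<void> => {
  // One invalidation path per uploaded object key
  const paths = event.Records.map(
    (record) => `/${decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))}`
  );

  await cloudfront.send(new CreateInvalidationCommand({
    DistributionId: process.env.DISTRIBUTION_ID, // assumed environment variable
    InvalidationBatch: {
      CallerReference: `${Date.now()}`, // must be unique per invalidation request
      Paths: { Quantity: paths.length, Items: paths },
    },
  }));
};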

Caching is a vast topic that deserves its own discussion. To learn more about optimizing content delivery for web apps with CloudFront and S3, check out my previous blog post:

https://cloudnature.net/blog/optimizing-content-delivery-the-complete-guide-through-s3-caching-and-cloudfront

tl;dr: optimizing content delivery for web apps with CloudFront and S3. First, a bit of caching knowledge, followed by real-world problems and solutions. In the end, I talk about my own website and how I optimized it with some caching strategies.

Frontend optimizations

Iā€™m always struggling with HTML/CSS, and I get the constant feeling that what Iā€™m doing could be done better and in far less time.

When it comes to optimizing frontend performance, there are several factors to consider, including network speed, static assets, and best coding practices. Here are some tips that Iā€™ve learned over the years to optimize my websiteā€™s performance.

Preconnecting and dns-prefetching

Preconnecting and dns-prefetching speed up future requests to a given origin by performing part or all of the connection setup (DNS lookup, TCP handshake, TLS negotiation) ahead of time. This means that by the time youā€™re downloading multiple images from that origin, the DNS lookup and handshake have already been done once, up front. To implement this, you can use the following two lines of code (one pair per origin you want to prepare):

<link rel="preconnect" href={PUBLIC_BASE_URL} />
<link rel="dns-prefetch" href={PUBLIC_BASE_URL} />

Prefetching content

Another way to speed up page load time is to prefetch content when users click or hover on links within your website. SvelteKit has a built-in attribute for this: data-sveltekit-preload-data="hover". Simply add this attribute to the <body> tag in your app.html to enable the feature.
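
For reference, this is roughly what the <body> tag in app.html looks like with preloading enabled (a minimal sketch based on the default SvelteKit template):

<body data-sveltekit-preload-data="hover">
  <div style="display: contents">%sveltekit.body%</div>
</body>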

Caching policies

Caching policies are crucial for optimizing the performance of a static website and reducing costs. In our website, we implemented caching strategies to cache as much as possible. Here are some of the caching policies we implemented:

  • We invalidate the CloudFront cache whenever we create or update posts or images, or deploy the web application;
  • Static files are cached for one year to reduce the number of requests;
  • The only file that is not cached is index.html. SvelteKit, like most frontend frameworks, uses a ā€œcache bustingā€ strategy: when a new version is built, the files get new (hashed) names, and those names are referenced in index.html. If we cached index.html, the browser wouldnā€™t fetch the new content from the CDN until the cache expired. A minimal deployment sketch follows below.
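
Here is a minimal CDK-style sketch of how such a deployment could look, assuming a websiteBucket construct and a ./build output folder (not necessarily the exact setup we used): hashed assets get a one-year cache, while index.html is deployed with no-cache.

import { Duration } from 'aws-cdk-lib';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

// Long-lived cache for the hashed build assets
new s3deploy.BucketDeployment(this, 'DeployStaticAssets', {
  sources: [s3deploy.Source.asset('./build')],
  destinationBucket: websiteBucket, // assumed existing bucket construct
  exclude: ['index.html'],
  cacheControl: [s3deploy.CacheControl.setPublic(), s3deploy.CacheControl.maxAge(Duration.days(365))],
  prune: false,
});

// index.html is never cached, so every deploy is picked up immediately
new s3deploy.BucketDeployment(this, 'DeployIndexHtml', {
  sources: [s3deploy.Source.asset('./build', { exclude: ['*', '!index.html'] })],
  destinationBucket: websiteBucket,
  cacheControl: [s3deploy.CacheControl.noCache()],
  prune: false,
});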

There is a lot more to say about caching policies, so if you havenā€™t already, please read my previous article on optimizing content delivery with S3 caching and CloudFront. šŸ‘‡

https://cloudnature.net/blog/optimizing-content-delivery-the-complete-guide-through-s3-caching-and-cloudfront

Next-Generation HTTP protocol

Itā€™s essential to use the latest HTTP protocols, which means moving off HTTP/1.1 and serving HTTP/2 and HTTP/3. Most browsers and frontend frameworks support these versions, so you just need to check your infrastructure and enable them if theyā€™re not on by default. In our case, we enabled the latest HTTP protocols in CloudFront with the following line of code:

httpVersion: HttpVersion.HTTP2_AND_3
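
For context, that property sits on the CloudFront distribution definition; a minimal CDK sketch (with assumed construct names) could look like this:

import { Distribution, HttpVersion } from 'aws-cdk-lib/aws-cloudfront';
import { S3Origin } from 'aws-cdk-lib/aws-cloudfront-origins';

new Distribution(this, 'WebsiteDistribution', {
  defaultBehavior: { origin: new S3Origin(websiteBucket) }, // assumed existing bucket construct
  httpVersion: HttpVersion.HTTP2_AND_3, // serve over HTTP/2 and HTTP/3
});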

Lazy and eager loading

Lazy and eager loading are important techniques to optimize web page loading times. Eager loading means that resources are loaded immediately, while lazy loading means that resources are loaded only when they are needed. A combination of both techniques is the key to success.

For example, for the blog post list, it is ideal to eager load the first six images and lazy load the remaining six. With pagination set up, this can be achieved easily (see the sketch below). For the article itself, it is best to eager load the first image and lazy load the rest; we donā€™t need to rush.
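
In SvelteKit this boils down to the native loading attribute; a minimal sketch for the post list (assuming hypothetical posts, coverUrl, and title fields) could look like this:

{#each posts as post, index}
  <!-- the first six covers load eagerly (above the fold), the rest lazily -->
  <img
    src={post.coverUrl}
    alt={post.title}
    loading={index < 6 ? 'eager' : 'lazy'}
  />
{/each}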

Image Optimization

Images can be a significant bottleneck when it comes to web page loading times. Proper image optimization can significantly improve the performance of the web page. Some techniques to optimize images include:

  • Using next-gen image formats such as WEBP, which is supported by 90% of modern browsers;
  • Serving images in different sizes, depending on the device being used, to reduce the amount of data being downloaded (sketched below).
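
A minimal sketch of both techniques combined, with hypothetical file names, could look like this:

<picture>
  <!-- WEBP variants in two sizes; the browser picks the best fit -->
  <source
    type="image/webp"
    srcset="/images/cover-480.webp 480w, /images/cover-1080.webp 1080w"
    sizes="(max-width: 600px) 480px, 1080px"
  />
  <!-- JPEG fallback for browsers without WEBP support -->
  <img src="/images/cover-1080.jpg" alt="Post cover" loading="lazy" />
</picture>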

Psssst, wanna know a secret šŸ¤«? Use this tool to compress your images šŸ‘‰ https://squoosh.app/ šŸ‘ˆ. Itā€™s free and maintained by the Google Web DevRel team.

Lighthouse

Lighthouse is a powerful tool that can be used to evaluate the performance of a web application and provide suggestions on how to improve it. If you have never used it before, be sure to check it out at šŸ‘‰ https://developers.google.com/web/tools/lighthouse.

Backend optimizations

This one was fun; I went above and beyond to optimize every piece of software I could. Letā€™s start with the first one, my favorite.

Removing CORS

First up, letā€™s talk about Cross-Origin Resource Sharing (CORS). Browsers use CORS to control access to resources on a different domain. For many cross-origin requests, the browser sends a ā€œpreflight requestā€ before making the actual one. This adds an extra round trip and slows down the overall response time.

To eliminate this bottleneck, we removed CORS altogether. By serving the API layer and the client from the same domain, we cut the API response time by roughly 3x, bringing it down from ~70ms to ~20ms.

Letā€™s see the architecture:

Figure 6: Architecture with and without CORS.

You probably already guessed it! To achieve this, we used a CloudFront multi-origin configuration. Hereā€™s a code snippet that shows how we did it:
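
(The exact snippet lives in the GitHub repo; below is a minimal CDK-style sketch of a multi-origin distribution with assumed construct names such as websiteBucket, restApi, and apiCachePolicy.)

import { AllowedMethods, Distribution, HttpVersion, OriginRequestPolicy, ViewerProtocolPolicy } from 'aws-cdk-lib/aws-cloudfront';
import { RestApiOrigin, S3Origin } from 'aws-cdk-lib/aws-cloudfront-origins';

new Distribution(this, 'BlogDistribution', {
  httpVersion: HttpVersion.HTTP2_AND_3,
  // Default origin: the SvelteKit static build hosted on S3
  defaultBehavior: {
    origin: new S3Origin(websiteBucket), // assumed existing bucket construct
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  // Second origin: API Gateway behind the same domain, so no CORS preflight is needed
  additionalBehaviors: {
    '/api/*': {
      origin: new RestApiOrigin(restApi), // assumed existing API Gateway construct
      allowedMethods: AllowedMethods.ALLOW_ALL,
      cachePolicy: apiCachePolicy, // assumed custom policy that caches GET responses
      originRequestPolicy: OriginRequestPolicy.ALL_VIEWER_EXCEPT_HOST_HEADER,
    },
  },
});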

As you can see, this is a simple CloudFront distribution. For the complete project, check out the GitHub link at the end of this article.

By removing CORS, we were able to significantly improve the performance of our serverless blog.

Caching policies

When it comes to caching, leveraging a CDN is an effective way to improve performance. Since we have already set up CloudFront as our CDN, we can use it as our cache as well. To determine the caching duration, we can set a cache expiration time of one year for our static assets. This means that CloudFront will cache our assets for up to a year, and if we need to update our content, we can invalidate the CloudFront cache and the latest version of the content will be served to the end-users.

However, itā€™s important to note that our frontend and backend are both hosted on the same CDN. Therefore, we need to be careful when invalidating the cache. We should only invalidate the cache for the API layer when updating content. For example, if we are updating the content of a blog post, we only need to invalidate the cache for the paths /api/posts/${id} and /api/posts/${slug}. If we are creating or deleting a blog post, we need to invalidate the cache for the paths /api/posts?*, /api/posts/${id}, and /api/posts/${slug}.
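
As a minimal sketch, the selective invalidation could look something like this (invalidatePostPaths is a hypothetical helper; id and slug come from the post being changed):

import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const cloudfront = new CloudFrontClient({});

export const invalidatePostPaths = async (action: 'create' | 'update' | 'delete', id: string, slug: string) => {
  const paths = action === 'update'
    ? [`/api/posts/${id}`, `/api/posts/${slug}`]
    : ['/api/posts?*', `/api/posts/${id}`, `/api/posts/${slug}`]; // create/delete also flushes the list

  await cloudfront.send(new CreateInvalidationCommand({
    DistributionId: process.env.DISTRIBUTION_ID, // assumed environment variable
    InvalidationBatch: {
      CallerReference: `${Date.now()}`,
      Paths: { Quantity: paths.length, Items: paths },
    },
  }));
};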

Minify code and clean dependencies

Minifying our code and cleaning up unnecessary dependencies is a simple yet effective way to improve the performance of our application. By removing comments, whitespace, and other unnecessary characters, we can significantly reduce the size of our code. Additionally, by ensuring that our packages only include the necessary dependencies, we can further reduce the size of our application and improve its performance.

To achieve these improvements, we can use esbuild in our deployment process. For example, if we are using the serverless framework, we can add the following lines to our serverless.ts file:

esbuild: {
  bundle: true,
  minify: true, // strip whitespace and comments, shorten identifiers
  sourcemap: true,
  exclude: [
    // already available in the Node.js 18 Lambda runtime, so no need to bundle them
    '@aws-sdk/client-cloudfront',
    '@aws-sdk/client-dynamodb',
    '@aws-sdk/client-s3',
    '@aws-sdk/lib-dynamodb',
    '@aws-sdk/s3-request-presigner',
    '@aws-sdk/util-dynamodb'
  ],
  target: 'node18',
  define: { 'require.resolve': undefined },
  platform: 'node',
},

In this snippet, the minify option is used to minify our code, and the exclude option is used to leave unnecessary dependencies out of the bundle. Since these dependencies are already present in the Node.js 18 Lambda runtime, there is no need to include them in our package.

Return only required data

To optimize the performance of your serverless API, itā€™s important to ensure that it returns only the data that is required by the particular page or request. For example, if youā€™re building a list of blog posts, you donā€™t need to include the entire content of each post in the response. This can slow down the page load time, especially if there are a large number of posts.

Similarly, when displaying related blog posts, you can exclude the content from the response to speed up the page load time. By returning only the necessary data, you can reduce the amount of data that needs to be transferred and processed, which can significantly improve the performance of your API.
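
With DynamoDB, one way to do this is a ProjectionExpression, so the heavy content attribute never leaves the table. Here is a minimal sketch with assumed table, index, and attribute names:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const listPublishedPosts = async () => {
  const { Items } = await ddb.send(new QueryCommand({
    TableName: process.env.POSTS_TABLE, // assumed environment variable
    IndexName: 'publishedAt-index', // assumed GSI
    KeyConditionExpression: '#state = :published',
    // Only the fields the list page needs: no full post content
    ProjectionExpression: 'id, slug, title, excerpt, coverImage, tags, publishedAt',
    ExpressionAttributeNames: { '#state': 'state' }, // "state" is a DynamoDB reserved word
    ExpressionAttributeValues: { ':published': 'PUBLISHED' },
  }));
  return Items;
};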

Resize images

Resizing images can have a significant impact on the performance of your serverless application, especially if youā€™re serving a large number of images. One way to resize images is to use the Sharp library in a Lambda function triggered by an S3 upload event. This allows you to automatically resize images as theyā€™re uploaded to your S3 bucket. You can find more info about it in this blogpost: https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/.
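
We didnā€™t wire this up ourselves, but a minimal sketch of such a Lambda, with an assumed bucket layout and output size, could look like this:

import { GetObjectCommand, PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import type { S3Event } from 'aws-lambda';
import sharp from 'sharp';

const s3 = new S3Client({});

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    const original = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const body = Buffer.from(await original.Body!.transformToByteArray());

    // Produce a 1080px-wide WEBP version next to the original
    const resized = await sharp(body).resize({ width: 1080 }).webp({ quality: 80 }).toBuffer();

    await s3.send(new PutObjectCommand({
      Bucket: bucket,
      Key: `resized/${key}.webp`, // assumed prefix; exclude it from the trigger to avoid loops
      Body: resized,
      ContentType: 'image/webp',
    }));
  }
};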

In my case I donā€™t have that much time, so Iā€™ll do it by hand with the little secret Iā€™ve shown you before šŸ˜Š.

Conclusion

In this series, we have explored how to build a lightning-fast serverless blog on AWS with SvelteKit, S3, and CloudFront while eliminating slow CORS. Weā€™ve looked at how to improve performance from both frontend and backend perspectives. Now, we are getting closer to releasing the website to production and seeing the real-world performance results.

šŸ™Huge thanks to Gianmarco for helping me out and creating a really cool website, check him out šŸ™‡.

You can find the project here: https://github.com/Depaa/website-blog-part4 šŸ˜‰

This series is a journey to production, and we have many more blog posts planned; hereā€™s the list, which Iā€™ll try to keep updated:

Thank you so much for reading! šŸ™ I will keep posting different AWS architectures from time to time, so follow me on Medium āœØ or on LinkedIn šŸ‘‰ https://www.linkedin.com/in/matteo-depascale/.


Written by Depascale Matteo

Hi, Iā€™m Matteo šŸ‘‹! Iā€™m an AWS Cloud Engineer and AWS Community Builder passionate about Serverless on AWS. Follow me: https://www.linkedin.com/in/matteo-depascale/
