This article is part of #25DaysOfServerless. New challenges will be published every day from Microsoft Cloud Advocates throughout the month of December. Find out more about how Microsoft Azure enables your Serverless functions.
Have an idea or a solution? Share your thoughts on Twitter!
Functions might be affordable for orchestrating and processing large, distributed data, but that does not mean they automatically offer a great experience.
Hear me out. I am not saying Functions are not performant; I am saying that what they are processing might not be.
How Important Is Caching?
I visited my friend, who has five kids, and it was fun. They had all these questions: why is the sky blue? Why does it rain? Why am I always on my computer? Where do babies come from? Why do grown-ups eat so much more?
The first time each of these questions came through, it totally threw me off. Of course, I had to be a smart grown-up who knows these things, so I would pretend I needed to get something done on my phone and then get back to them. That way I could buy myself a few minutes to digest Wikipedia.
If you know kids, you know they like to hear the same thing over and over again if they find it interesting. Some of the answers were fun, so they would keep asking me every day why the sun is yellow.
The first time, I would find myself stuck trying to figure out the right answer. Subsequent times, it was just there in my head, ready to pop out after the question. The first time took 5-15 minutes; the second time took 5-15 seconds.
Unfortunately for computers, things are garbage in, garbage out (don't let AI tell you otherwise). A computer won't just know that it's time to remember things for the next request. Asking a computer to remember the answer from a previous request and respond with that answer faster, since it doesn't have to be processed again, is what we refer to as caching.
How to Cache with Serverless
Caching with Serverless is like caching anywhere else. You need two things:
- Cache storage
- A caching strategy
The storage is where the response to be cached lives; the strategy is the set of conditions that must be met to either store something in the cache or remove something from it.
Serverless functions back the 25DaysOfServerless website, and the challenges you see are cached. Here is what our caching strategy looks like:
When you visit a challenge page, say https://25daysofserverless.com/calendar/12 we:
- Check if someone has asked for that same challenge
- If yes, we send you what we sent that person
- If no, you are the first person
- Since you are the first person, we:
  a. Fetch the challenge from GitHub
  b. Process the content
  c. Upload any images to a CDN
  d. Cache the content for subsequent requests
  e. Send you the processed content
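The steps above are the classic cache-aside pattern: check the cache first, and only do the expensive work on a miss. Here is a minimal sketch of that flow; note that a plain in-memory Map stands in for Redis here, and `fetchAndProcess` is a hypothetical stand-in for "fetch from GitHub, process markdown, upload images":

```javascript
// Minimal cache-aside sketch. The real site uses Redis; a Map is
// used here only to illustrate the strategy.
const cache = new Map();

async function getChallenge(week, day, fetchAndProcess) {
  const key = `week-${week}/challenge-${day}`;
  // 1. Check if someone has already asked for this challenge
  if (cache.has(key)) {
    // 2. If yes, send what we sent that person
    return cache.get(key);
  }
  // 3. If no, you are the first person: do the expensive work...
  const content = await fetchAndProcess(week, day);
  // 4. ...cache it for subsequent requests...
  cache.set(key, content);
  // 5. ...and send the processed content
  return content;
}
```

Only the first request for a given key pays the full cost; every request after that is a cache hit.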
Code Examples
Start with creating a basic serverless function for free:
Before we can cache, we need to set up cache storage. Everyone seems to love Redis; let's go with that. Here's a super quick tutorial to set up cache storage and a cache server using Redis for free:
In your function root, via a CLI tool, install a Redis SDK. I am using Node so I can install with npm:
npm install --save redis
Import the SDK in your function's index.js file:
const redis = require('redis');
The SDK needs to know how to talk to your Azure Redis Cache, so you need to give it some connection credentials. You do this by creating a client:
const client = redis.createClient(6380, process.env['REDIS_CACHE_HOSTNAME'], {
  auth_pass: process.env['REDIS_CACHE_KEY'],
  tls: { servername: process.env['REDIS_CACHE_HOSTNAME'] }
});
The credentials are read from the local.settings.json environment variables. Set those variables to the values you got when creating the Redis server:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "REDIS_CACHE_HOSTNAME": "🙈",
    "REDIS_CACHE_KEY": "🙈"
  }
}
Redis Node APIs are powered by callbacks (old school, right?). We want promises, async/await, and all that cool stuff. You can promisify the APIs using the util library from Node core:
const promisify = require('util').promisify;
const getAsync = promisify(client.get).bind(client);
const setAsync = promisify(client.set).bind(client);
Now you can get and set items from your cache:
async function processRequest(week, day) {
  const pathToChallenge = `week-${week}/challenge-${day}/README.md`;
  // Check if the challenge exists in the cache
  const challengeFromRedis = await getAsync(pathToChallenge);

  if (challengeFromRedis) {
    // It does exist; respond with the cached content
    return { content: challengeFromRedis };
  } else {
    // It does not exist
    // Fetch the challenge from GitHub
    const response = await fetchChallenge(week, day);
    // Process the markdown
    const decodedReadme = decodeContent(response.data.content);
    // Upload markdown images to a CDN
    const markedContent = await parseMarkdown(decodedReadme, week, day);
    // Add to the cache for subsequent requests
    await setAsync(pathToChallenge, markedContent);
    // Respond
    return { content: markedContent };
  }
}
If you are wondering how we invalidate the cache when the content on GitHub is updated, take a look at my solution for [Challenge Day 3](Link to day 3).
Want to submit your solution to this challenge? Build a solution locally and then PR this repo. If your solution doesn't involve code you can record a short video and submit it as a PR to the same repo. Make sure to tell us which challenge the solution is for. We're excited to see what you build! Do you have comments or questions? Add them to the comments area below.
Watch for surprises all during December as we celebrate 25 Days of Serverless. Stay tuned here on dev.to as we feature challenges and solutions! Sign up for a free account on Azure to get ready for the challenges!