Running Cloudflare Workers (workerd) on Docker/Kubernetes

Peter Mbanugo - Sep 30 '22 - Dev Community

Cloudflare Workers are one of the many ways to deploy serverless code with exceptional performance, reliability, and scale. Although I'd never used it until today (i.e. 29 Sept. 2022), I liked their approach and how it uses standard Web APIs. I personally prefer runtimes/APIs that follow open standards, so that portability is easy and knowledge can be reused when working with a different client/product on any public cloud provider. I was excited when they announced earlier this year that they would open-source the Workers runtime, and I waited patiently and eagerly until they announced it and made the code public.

YAY! It's an open-source-friendly license (Apache 2.0).

They called it workerd, a JavaScript/Wasm runtime based on the same code that powers Cloudflare Workers. It can be used to self-host applications and is intended to be a production-ready web server for that purpose. At the time of writing, though, it's still in open beta, and a lot of things are likely to change. You can read their blog post to get the inside scoop on workerd and what to expect.

Running workerd In Docker

Containers provide a way to package and run applications consistently in any environment, from on-premises servers to Kubernetes clusters running in any public cloud. That portability, combined with workerd's standards-based runtime, could give us more flexibility and power to run applications in any environment, and likely 🤔 in any language, thanks to Wasm.

Enough blabbing from me about workerd, serverless functions, and containers. Let's dig into some code and how it works.

I took a sample app from Walshy on GitHub, made a few tweaks, and ran it on Docker. It's a URL-shortening service implemented in a few lines of code. You can find the source code on GitHub, but I'll show off some of the code here.

To run workerd, you either compile and build the daemon yourself or use what's available on npm. So I have an npm project with some dependencies to run the app.
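For reference, here's roughly what getting it running locally looks like via the npm route. The workerd package ships a workerd binary, and workerd serve takes the Cap'n Proto config file as input; the build script name below is an assumption about the repo, since the worker code needs to be bundled into a single file first.

# Install dependencies, including the workerd binary published to npm
npm install

# Bundle the worker code into a single file (script name assumed)
npm run build

# Serve the app locally using the Cap'n Proto config
npx workerd serve config.capnp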

I will only show some important files in this post. Feel free to clone the complete code on GitHub.

Here's the index.js:

import { Hono } from "hono";
import createRedirect from "./create";
import handleRedirect from "./redirect";

const app = new Hono();

app.get("*", handleRedirect);
app.post("/create", createRedirect);

export default app;

The app uses Hono, a small, simple, and ultrafast web framework for Cloudflare Workers, Deno, Bun, and Node. The /create route maps a destination URL to a slug/code and saves it. That slug is then used to fetch the destination URL from the database, and a GET request with the slug in the path redirects you to it.
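To make that concrete, here's roughly how you'd exercise the two routes once the server is running, assuming it listens on localhost:8080 (the port the Dockerfile later in this post exposes).

# Create a redirect; the response contains the generated (or provided) slug
curl -X POST -d '{"destination": "https://example.com"}' http://localhost:8080/create

# Follow the redirect (replace SLUG with the slug returned above)
curl -i http://localhost:8080/SLUG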

Here's the code for createRedirect in create.ts:

import { HonoContext } from "hono/dist/context";
import { nanoid } from "nanoid";

interface RequestBody {
  slug?: string;
  destination?: string;
}

const createRedirect = async (c: HonoContext) => {
  const body = await c.req.json<RequestBody>();

  if (!body.destination) {
    return c.json({ error: "Missing destination!" });
  }
  // Use the provided slug, or generate a short random one
  const slug = body.slug ?? nanoid(7);

  // Store the slug -> destination mapping through the Upstash Redis REST API
  const options = {
    method: "POST",
    headers: {
      Authorization: `Bearer ${c.env.UPSTASH_REDIS_REST_TOKEN}`,
    },
    body: JSON.stringify({ slug, destination: body.destination }),
  };
  const url = `${c.env.UPSTASH_REDIS_REST_URL}/set/${slug}`;

  await fetch(url, options);

  return c.json({
    message: "Created redirect!",
    slug,
  });
};

export default createRedirect;

I save the data to a Redis database running on Upstash. It's simple enough that I just use the Fetch API to read from and write to the database (long live HTTP 🤪).
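For illustration, this is roughly what those fetch calls translate to on the command line. The value stored under the slug is the JSON string built in create.ts, which is why redirect.ts has to JSON.parse the result field; the slug and URL here are placeholders.

# Assumes UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN are set in your shell

# Store a value under a slug (what createRedirect does)
curl -X POST -d '{"slug":"abc1234","destination":"https://example.com"}' \
  -H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN" \
  "$UPSTASH_REDIS_REST_URL/set/abc1234"

# Read it back (what handleRedirect does); unknown slugs return {"result":null}
curl -H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN" \
  "$UPSTASH_REDIS_REST_URL/get/abc1234"
# => {"result":"{\"slug\":\"abc1234\",\"destination\":\"https://example.com\"}"}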

The redirect logic is similarly simple. Here's redirect.ts:

import { HonoContext } from "hono/dist/context";

const handleRedirect = async (c: HonoContext) => {
  try {
    // The path (minus the leading slash) is the slug to look up
    const { pathname } = new URL(c.req.url);
    const path = pathname.slice(1);

    const init = {
      headers: {
        Authorization: `Bearer ${c.env.UPSTASH_REDIS_REST_TOKEN}`,
      },
    };
    const url = `${c.env.UPSTASH_REDIS_REST_URL}/get/${path}`;

    // Look up the slug in Upstash; `result` is null when the slug doesn't exist
    const res = await fetch(url, init);
    const { result } = await res.json<{ result: string | null }>();

    if (!result) {
      c.status(404);
      return c.json({ error: "Not found!" });
    }

    const { destination }: { destination: string } = JSON.parse(result);
    return c.redirect(destination);
  } catch (e) {
    return c.json({
      error: "exception caught! Message: " + e.message,
      stack: e.stack,
    });
  }
};

export default handleRedirect;

The data is read from the database, and a redirect instruction is returned to the client using c.redirect(). You may have noticed that c.env is used to retrieve environment variables in the app, unlike Node, where you'd read them from process.env. These values are defined as bindings in config.capnp. I struggled to find a way to inject them at runtime and didn't land on a simple solution. I got a couple of suggestions, but they're out of scope for this post, and maybe there'll be easier ways to deal with secrets in the future.
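For context, here's a minimal sketch of what a config.capnp with those bindings could look like, based on the samples in the workerd repository. The file path, compatibility date, and placeholder values are assumptions, not the exact contents of this project; text bindings like these show up as properties on the env object the worker receives.

using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "main", worker = .mainWorker) ],
  sockets = [ (name = "http", address = "*:8080", http = (), service = "main") ]
);

const mainWorker :Workerd.Worker = (
  # The bundled output of the app (path assumed)
  modules = [ (name = "worker", esModule = embed "dist/worker.js") ],
  compatibilityDate = "2022-09-16",
  bindings = [
    (name = "UPSTASH_REDIS_REST_URL", text = "https://<your-db>.upstash.io"),
    (name = "UPSTASH_REDIS_REST_TOKEN", text = "<your-token>"),
  ],
);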

Finally, we need a Dockerfile to build the image. Here's what I used:

FROM node:18-slim

# Install dependencies
RUN apt-get -qq update && \
    apt-get install -qqy --no-install-recommends \
        ca-certificates \
        libc++-dev &&  \
    rm -rf /var/lib/apt/lists/*

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm ci --only=production

COPY . .

EXPOSE 8080
CMD [ "npm", "start" ]

With that, you can build a Docker image named redirect using the docker build . -t redirect command, and run it using the docker run command, as shown below.
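Here are those commands spelled out. The port mapping assumes the socket in config.capnp listens on 8080, which is the port the Dockerfile exposes.

# Build the image and tag it "redirect"
docker build . -t redirect

# Run it, mapping the container's port 8080 to the host
docker run --rm -p 8080:8080 redirect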

Running On Kubernetes

If you can run this on Docker, what are the possibilities of running the same image in Kubernetes or other container-as-a-service platforms like Google Cloud Run or AWS Fargate?

I use Kubernetes for some client projects, and I'm an advocate for Knative, a solution for building serverless, event-driven applications on Kubernetes. By serverless, I mean running a container that can automatically scale down to zero. How you manage the cluster is up to you: it could be self-managed on-premises or in the cloud, or you could use Google GKE Autopilot, where there's almost no cluster management to do. The point is that a service can scale down to zero, so you can make better use of your server resources.

Running the image using Knative can be as simple as running the command kn service create url --image pmbanugo/redirect -n workers. The kn CLI has commands that make it easy to deploy services without writing YAML files.
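For reference, here's that deployment as you'd type it, plus one way to retrieve the URL Knative assigns to the service afterwards (assuming the workers namespace already exists):

# Deploy the image as a Knative Service named "url" in the "workers" namespace
kn service create url --image pmbanugo/redirect -n workers

# Print the URL assigned to the service
kn service describe url -n workers -o url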

Give it a go if you use Knative.

In fact, I have the URL shortening service running in my Civo Kubernetes cluster. You can access it using https://url.workers.74.220.31.62.sslip.io. For example, this URL should redirect you to my website: https://url.workers.74.220.31.62.sslip.io/5ejQD6p.

You can create a shortened link using the command below:

curl -X POST -d '{"destination": "https://YOUR-URL.com"}' https://url.workers.74.220.31.62.sslip.io/create

It will return the slug that was used to save your URL. Take that slug and append it to https://url.workers.74.220.31.62.sslip.io/ to get redirected to the saved destination URL.

What's Next?

That's all. Feel free to get the source code and play with it as much as you like.

What I hope to see is an official image for workerd, improved documentation, and perhaps a way to inject environment variables from the container's environment at runtime. This was my first time trying Workers, and I wanted to share how I got it running on Kubernetes. I hope the project grows to become stable, and I look forward to having it as an alternative for running nanoservices/FaaS on Knative/Kubernetes.
