
How Does Cloudflare Workers Compare to AWS Lambda?

Cloudflare Workers

After exclusively running my workloads at AWS for years and years, I’ve recently started building some applications with Cloudflare Workers. Initially this was just to teach myself a new technology. However, I’ve been so impressed I’ve started migrating several AWS Lambda applications to Cloudflare.

Workers are a stateless, serverless compute option on Cloudflare’s edge. What does that mean? It means you can run small bits of code, which don’t require session state, on Cloudflare’s CDN network, in over 300 cities across 100+ countries. The code runs very close to your end users, network-wise, meaning the lowest possible latency between your user and your Worker code. This can dramatically improve end-user performance, especially if your web application makes many API calls (as many modern web apps do).
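To make that concrete, here’s a minimal sketch (not from the original post) of what a Worker looks like using the module syntax. It simply echoes back which PoP served the request; the cast around request.cf is just to avoid depending on the @cloudflare/workers-types package in this sketch.

```typescript
// A minimal Cloudflare Worker (module syntax). It runs at whichever Cloudflare
// PoP is closest to the user and simply echoes back some request details.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // request.cf is Cloudflare-specific request metadata; colo identifies the
    // PoP (data center) that handled this request.
    const colo = (request as { cf?: { colo?: string } }).cf?.colo ?? "unknown";

    return new Response(JSON.stringify({ path: url.pathname, servedFrom: colo }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```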

Quick Introduction to Cloudflare

Before I go further, if you aren’t familiar with Cloudflare, they are a global CDN company. Similar to Akamai, Fastly, and AWS CloudFront, they provide performance and security solutions for websites, web apps, and much more. Cloudflare started out as a security service, rather than a pure CDN, but quickly added performance features and became an industry leader in the CDN space. That focus on performance has driven a continued evolution of the services and functionality Cloudflare offers.

There are many good CDNs out there, but without getting too far off track, Cloudflare is probably my favorite and what I use for all my sites and web applications. 

Cloudflare Workers vs Lambda and Lambda@Edge

If you are aware of serverless computing, you’re probably aware of AWS Lambda, the 800 lb gorilla in the world of serverless compute. Cloudflare Workers are more similar to Lambda@Edge or CloudFront Functions in that they are deployed out to the Edge, i.e. they run on many global PoPs (points of presence), with low latency to the end user. But we’ll compare them to regular Lambda as well, since Cloudflare doesn’t offer a non-Edge serverless compute option like Lambda.

AWS CloudFront Functions

First off, CloudFront Functions are fast and run in edge location data center points of presence (PoPs). However, they are so limited in capabilities (no network access, no filesystem access, no request body access, and a sub-millisecond execution time limit) that I don’t really consider them comparable for most use cases.

AWS Lambda and Lambda@Edge

Lambda and Lambda@Edge are similar, with lots of features and the ability to access your AWS resources. Lambdas can be written in a wide range of programming languages including Node.js, Java, Python, .NET, Ruby, and more. Lambdas run in your specified Region and are automatically designed to be resilient across the Availability Zones in that Region. AWS Lambda is the default for serverless computing today.
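For comparison with the Worker sketch above, here is a hedged sketch of a minimal Lambda handler on the Node.js runtime, written in TypeScript. It assumes an API Gateway HTTP API (payload format v2) trigger and the @types/aws-lambda type definitions.

```typescript
// A minimal AWS Lambda handler (Node.js runtime, TypeScript). The event shape
// depends on the trigger; this sketch assumes an API Gateway HTTP API
// (payload format v2) invocation.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ path: event.rawPath }),
  };
};
```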

Lambda@Edge is similar, however it deploys and runs your code in whichever Region is closest to the end user, much like CloudFront’s Regional Edge Caches. It costs more, but should provide better performance than regular Lambda for a geographically diverse user base.

Cloudflare Workers

Cloudflare Workers run at the Edge, similar to CloudFront Functions, but without all of CloudFront Functions’ limitations. As a major global CDN, Cloudflare also has many more edge locations across the globe than Amazon does. So your code will be running much closer to your end users, no matter where they are, than with any AWS Function or Lambda based solution. This means that in theory Cloudflare Workers should have the lowest latency of all the options. Currently Workers only support the JavaScript runtime, so they are more limited than AWS Lambda in that regard.

Simple Comparison Table

| Technology | Where Does it Run? | Supported Languages | Resource Limitations |
|---|---|---|---|
| AWS CloudFront Functions | AWS Edge PoPs | JavaScript | Extremely Limited |
| AWS Lambda | AWS Region | JavaScript, Java, .NET, Python, and more | Reasonable Limits |
| AWS Lambda@Edge | All AWS Regions | JavaScript (Node.js), Python | Reasonable Limits |
| Cloudflare Workers | Cloudflare Edge PoPs | JavaScript | Reasonable Limits |
Simple Serverless Comparison Table

The Services Ecosystem 

A big difference between AWS Lambda and Cloudflare Workers is the available service ecosystem. At AWS you can access essentially any AWS service, or your own AWS-deployed services, from your Lambda relatively easily. You can trigger your Lambdas from HTTP requests via API Gateway, or from events from numerous AWS service sources. You can use Lambdas in Step Functions, and take advantage of a large number of existing integrations to and from Lambdas within your AWS ecosystem.

Cloudflare has a much more limited set of services available in its ecosystem. These include KV (key-value store), R2 (S3-compatible object storage), D1 (SQL database), Durable Objects (globally coordinated, strongly consistent transactional storage), Vectorize (vector search database for AI embeddings), Queues, and Hyperdrive (a service that allows you to connect to an existing database on-premise or in the cloud). The good news is that these services are designed from the ground up for global edge deployment, offer varying degrees of strong or eventual consistency, and are globally resilient without you having to do anything. They also have Workers AI in open beta, which provides global serverless GPU-based AI/ML operations using popular open-source models.
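As an illustrative sketch of how those services are consumed (the CACHE binding name is hypothetical, and the KVNamespace type assumes @cloudflare/workers-types), here is roughly what using KV from a Worker looks like:

```typescript
// Sketch of a Worker using a KV namespace as an edge cache. The binding name
// CACHE is hypothetical; it would be declared in the project's Wrangler
// configuration and surfaced to the Worker on the env parameter.
export interface Env {
  CACHE: KVNamespace; // type provided by @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname;

    // Try the globally replicated (eventually consistent) KV store first.
    const cached = await env.CACHE.get(key);
    if (cached !== null) {
      return new Response(cached, { headers: { "x-cache": "hit" } });
    }

    // Miss: compute a value and store it with a five-minute TTL.
    const value = `generated at ${new Date().toISOString()}`;
    await env.CACHE.put(key, value, { expirationTtl: 300 });
    return new Response(value, { headers: { "x-cache": "miss" } });
  },
};
```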

This really comes down to what type of services your serverless function needs. If all you need is a datastore (KV or RDBMS) then Cloudflare Workers are a great choice. If you need deeper integration with AWS services like Cognito or Redshift then AWS Lambda is a better option. Of course, you can always call a publicly exposed API at AWS from your Cloudflare Worker if you need to integrate with workloads running at AWS, but you lose some of the performance advantage of being able to handle the request entirely from the Edge.
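If you do need to reach back into AWS, a Worker can simply fetch a public endpoint. Here is a rough sketch; the upstream URL is hypothetical and stands in for something like an API Gateway endpoint:

```typescript
// Sketch of a Worker proxying a request to an existing, publicly exposed API
// running at AWS. The upstream URL below is hypothetical.
export default {
  async fetch(request: Request): Promise<Response> {
    const upstream = "https://api.example.com/v1/orders";

    // Awaiting the upstream call is I/O wait, not CPU time.
    const response = await fetch(upstream, {
      method: "GET",
      headers: { accept: "application/json" },
    });

    if (!response.ok) {
      return new Response("Upstream error", { status: 502 });
    }

    // Stream the upstream body back to the client from the edge.
    return new Response(response.body, {
      headers: {
        "content-type": response.headers.get("content-type") ?? "application/json",
      },
    });
  },
};
```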

Limitations

Each option, including Workers, has a long list of limitations. These include code size, compute resources, network resources, and more. There are also differing limits based on your plan tier. So rather than try to analyze those here, I’ll simply say to do some homework so you don’t run into any surprises.

Simplicity

Simplicity is becoming increasingly rare in technology these days. The long-term value of simple, easy-to-manage solutions cannot be overstated. This is where Cloudflare Workers really shine for me: the simplicity of development and deployment. Cloudflare provides a command line tool called Wrangler, which makes creating, developing, testing, and deploying Workers projects very straightforward. It also lets you create and interact with other Cloudflare Edge services like KV, D1, and R2.

Wrangler manages a local development environment, including local versions of data stores and queues, making development and local testing very easy. You can also develop and test against the cloud-based data stores and services easily. It’s similar to AWS SAM, but simpler and quicker.

When it’s time to deploy your Worker function, Wrangler handles that in less than 2 seconds. That’s right, full deployment to the cloud in less than 2 seconds. Being able to push changes so quickly is like magic. You can of course also run Wrangler from your CI/CD pipeline for a more mature process.

Cloudflare Workers deployments are also much simpler than AWS Lambda deployments for the most part, because unlike at AWS, you do not have to define an API Gateway, IAM roles, permissions, complex access rules, and so on. You just deploy your Worker, and in the configuration file you list which services, such as a specific database or KV store, it should be allowed to access, and it is all set up for you automatically. There’s no API Gateway, and no event payload hassle that it brings; it’s just straightforward HTTP requests.
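To illustrate what that looks like in code (all binding names here are hypothetical, and the types assume @cloudflare/workers-types): each service you grant access to in the configuration simply shows up as a property on env, and the handler receives a plain Request rather than a gateway event payload.

```typescript
// Illustration of config-declared bindings surfacing in code. SESSIONS,
// ASSETS, and APP_DB are hypothetical binding names that would be listed in
// the Worker's Wrangler configuration; no IAM roles or API Gateway required.
export interface Env {
  SESSIONS: KVNamespace; // KV namespace binding
  ASSETS: R2Bucket;      // R2 bucket binding
  APP_DB: D1Database;    // D1 database binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Query D1 directly through the binding; the handler itself just sees a
    // standard Request and returns a standard Response.
    const { results } = await env.APP_DB
      .prepare("SELECT id, name FROM users LIMIT ?")
      .bind(10)
      .all();

    return Response.json({ users: results });
  },
};
```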

Environments

One challenge with AWS Lambda is managing different environments like DEV, STG, and PRD. For larger organizations, using separate AWS accounts for each environment is common, but this isn’t always practical for smaller teams or individual developers. Even with tools like AWS CDK, managing environment-specific configurations can be complex and often leads to conflicts, especially with resources auto-created by CDK or CloudFormation.

In contrast, Cloudflare Workers streamline this process. With the wrangler CLI tool, you can easily deploy to different environments using simple commands like wrangler deploy -e prod. This approach allows for straightforward creation and management of multiple environments. Additionally, wrangler supports quick rollbacks, deployment history, and secrets management, greatly simplifying the development lifecycle compared to AWS Lambda.

This ease of managing environments with Cloudflare Workers allows developers to focus more on coding and less on the complexities of infrastructure, providing a smoother workflow for projects of any size.


So overall I spend much less time defining my architecture than with AWS Lambda, the local development environment is quick and easy to work with, deployments are super fast, and managing things like multiple environments and rollbacks is very easy. I can focus on writing code and running my application instead.

Cost

Each application and use case is going to be different, so I suggest running your own calculations, or better yet PoCs. That said, Cloudflare Workers pricing generally has larger free tiers, lower costs on billable metrics, and most importantly no semi-hidden costs (egress bandwidth, API Gateway charges, etc.).

Another big difference is that Cloudflare Workers bill on CPU time, not wall-clock time. So if your serverless function spends time waiting on I/O (an external API call, AI service call, etc.), you DON’T pay for that wait time! Depending on your application, this could be a huge cost savings for you. You can also easily configure usage limits in your Worker’s configuration file or in the Dashboard to avoid unexpectedly large bills!

Many published 3rd party comparisons show Cloudflare Workers costing 25-50% less than AWS Lambdas. But each application, and traffic load, will be different. 

Summary

Now, obviously, if your needs exclude Cloudflare Workers, due to runtime limitations, needing access to AWS services, and so on, then AWS Lambda or Lambda@Edge are great options.

However, if your use case can be met in the Cloudflare edge environment, then Workers can be a fast, very low-latency, simple to manage, and cheaper option. I’ve moved a few personal Lambda-based applications over to Cloudflare Workers so far, and am very happy with the benefits.

It’s well worth evaluating whether Cloudflare Workers are suitable for your project’s needs!



Comments

One response to “How Does Cloudflare Workers Compare to AWS Lambda?”

  1. Nicolas Montoya

    Another great thing with Workers is that you are not billed for compute time! So you can have workers that may take 5 to 10 seconds per invoke, but you won’t get billed for that duration, there are only limits for CPU work time.

    So if you have to call 3rd party APIs and use async/await, and end up waiting 20 seconds for a super slow API response, you won’t get billed for that, only the time iterating through a map or implementing some algorithm. Which is a gamechanger for developing more sophisticated workflows!

    There has even been some development of building state machines, similar to AWS StepFunctions, with Cloudflare workers using xState: https://github.com/drivly/state.do
