I've been working on this for some time now, starting with vm2, then deno-core for 2 years, and recently rewrote it on rusty_v8 with Claude's help.
OpenWorkers lets you run untrusted JS in V8 isolates on your own infrastructure. Same DX as Cloudflare Workers, no vendor lock-in.
What works today: fetch, KV, Postgres bindings, S3/R2, cron scheduling, crypto.subtle.
Self-hosting is a single docker-compose file + Postgres.
Would love feedback on the architecture and what feature you'd want next.
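For a sense of the DX, here's a minimal sketch of a Worker in this model. It assumes the standard Workers-style module syntax (fetch plus a scheduled handler for cron, following Cloudflare's convention); the KV binding name "CACHE" is just an illustration, not something the project mandates.

    // Minimal Worker sketch: a fetch handler plus a cron-scheduled handler.
    // The KV binding name "CACHE" is hypothetical; use whatever name you configure.
    export default {
      async fetch(request, env) {
        const url = new URL(request.url);

        // KV binding: read-through cache keyed by path
        const cached = await env.CACHE.get(url.pathname);
        if (cached) return new Response(cached, { headers: { "x-cache": "hit" } });

        // crypto.subtle is available inside the isolate
        const digest = await crypto.subtle.digest(
          "SHA-256",
          new TextEncoder().encode(url.pathname)
        );
        const body = Array.from(new Uint8Array(digest))
          .map((b) => b.toString(16).padStart(2, "0"))
          .join("");

        await env.CACHE.put(url.pathname, body);
        return new Response(body, { headers: { "x-cache": "miss" } });
      },

      // Entry point invoked by the cron scheduler
      async scheduled(event, env) {
        await env.CACHE.put("last-run", new Date().toISOString());
      },
    };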
To the author: The ASCII-art Architecture diagram is very broken, at least on my Pixel phone with Firefox.
These kinds of text-based diagrams are appealing to us techies, but in the end I learned that they are less practical. My suggestion is to use an image, and think of the text-based version as the "source code" which you keep, while what gets published is the output of "compiling" it into something that is guaranteed to always be viewable without mistakes (which is where ASCII art tends to fail).
Rendered perfectly on my iPhone 11 Safari.
Isn't the whole point of Cloudflare's Workers to pay per function? If it is self-hosted, you must dedicate hardware in advance, even if it's rented in the cloud.
I upvote anything that reduces vendor lock-in. Hopefully cloud services see a mass exodus so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT.
Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.
I'm worried that increasing RAM prices will drive more people away from local hosting and toward cloud services, because if the big companies are buying up all the resources, it might not be feasible to self-host in a few years.
The pricing is so insane that it will always be cheaper to self-host, by 100x. That's how bad it is.
not 100x.
10% is the number I ordinarily see, accounting for staff and adequate DR systems.
If we had paid our IT teams half of what we pay a cloud provider, we would have had better internal processes.
Instead we starved them, and the cloud providers successfully weaponised extremely short-term thinking against us. Now barely anyone has the competence to actually manifest those cost benefits without serious instability.
Wait, what? Can you show me some sources to back this up? I assume you are exaggerating, but still, it would be interesting to know what your definition of cheap is.
I don't think that, after RAM prices spiked 4-5x, it's going to be cheaper to self-host by 100x. Hetzner's or OVH's cloud offerings are cheap.
Plus you have to put up a lot of money and then still pay for something like colocation if you are competing with them.
Even if you aren't, I think the models are different: cloud is a monthly subscription, whereas with hardware you have to purchase it up front.
It would be interesting, though, to compare hardware-as-a-service or similar offerings as well, but I don't know if I see them for individual use.
100x is probably hyperbole. 37signals saved between 50 and 66% in hosting costs when moving from the cloud to self-hosted.
https://basecamp.com/cloud-exit
But they have scale. A small company will save less, because it's not that much more work to handle, say, a 100-node Kubernetes cluster vs a 10-node one.
Probably worth pointing out that the Cloudflare Workers runtime is already open source: https://github.com/cloudflare/workerd
True, workerd is open source. But the bindings (KV, R2, D1, Queues, etc.) aren't – they're Cloudflare's proprietary services. OpenWorkers includes open source bindings you can self-host.
I tried to run it locally some time ago, but it's buggy as hell when self-hosted. It's not even worth trying, given that CF itself doesn't recommend it.
> so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT
How is the cost of NAT free?
> Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.
I don't doubt clouds are expensive, but in many countries it'd cost more to DIY for a proper business. Running a service isn't just running the install command. Having a team to maintain and monitor services is already expensive.
They said “charging more than free” - i.e., more than $0, i.e., they’re not free. It was awkwardly worded.
They said "instead of charging more than free", which means should be free.
Please read it again.
I think we’re in violent agreement, but you were ambiguous about what “cost” meant. It seems you meant “cost of providing NAT” but I interpreted it as “cost to the customer.”
The problem with sandboxing solutions is that they have to provide very solid guarantees that code can't escape the sandbox, which is really difficult to do.
Any time I'm evaluating a sandbox that's what I want to see: evidence that it's been robustly tested against all manner of potential attacks, accompanied by detailed documentation to help me understand how it protects against them.
This level of documentation is rare! I'm not sure I can point to an example that feels good to me.
So the next thing I look for is evidence that the solution is being used in production by a company large enough to have a dedicated security team maintaining it, and with real money on the line for if the system breaks.
Yes, exactly. The other reason the Cloudflare Workers runtime is secure is that they are incredibly active at keeping it patched and up to date with V8 main. It's often ahead of Chrome in adopting V8 releases.
I didn't know this, but there are also security downsides to being ahead of Chrome. Namely, all Chrome releases take dependencies on "known good" V8 release versions which have at least passed normal tests and minimal fuzzing, and V8 releases have gone through much more public review and fuzzing by the time they reach the Chrome stable channel. I expect if you want to be as secure as possible, you'd want to stay aligned with whatever V8 is in Chrome stable.
Fair point. The V8 isolate provides memory isolation, and we enforce CPU limits (100ms) and memory caps (128MB). Workers run in separate isolates, not separate processes, so it's similar to Cloudflare's model. That said, for truly untrusted third-party code, I'd recommend running the whole thing in a container/VM as an extra layer. The sandboxing is more about resource isolation than security-grade multi-tenancy.
I think you should consider adjusting the marketing to reflect this. "untrusted JavaScript" -> "JavaScript", "Secure sandboxing with CPU (100ms) and memory (128MB) limits per worker" -> "Sandboxing with CPU (100ms) and memory (128MB) limits per worker", overhauling https://openworkers.com/docs/architecture/security.
Overpromising on security hurts the credibility of the entire project - and the main use case for this project is probably executing trusted code in a self-hosted environment, not "execut[ing] untrusted code in a multi-tenant environment".
Great point, thanks. Just updated the site – removed "untrusted" and "secure", and added a note clarifying the threat model.
I don't think what you want is even possible. What would such guarantees even look like? "Hello, we are a serious cybersec firm and we have evaluated the code and it's pretty sound, trust us!"?
"Hello, we are a serious cybersec firm, we have evaluated the code, and here are our tests with results proving that we didn't find anything; the code is sound. Have we been thorough? We have, trust us!"
In terms of a one-off product without active support, the only thing I can really imagine is a significant use of formal methods to prove correctness of the entire runtime. Which is of course entirely impractical given the state of the technology today.
Realistically, security these days is an ongoing process, not a one-off. Compare Cloudflare's security page: https://developers.cloudflare.com/workers/reference/security... (to be clear, when I use the pronoun "we" below I'm paraphrasing and am not personally employed by Cloudflare or part of this at all)
- Implicit/from other pieces of marketing: We're a reputable company, and these other big, reputable companies who care about security and are juicy targets for attacks are using this product.
- We update V8 within 24 hours of a security update, compared to weeks for the big juicy target of Google Chrome.
- We use various additional sandboxing techniques on top of V8, including the complete lack of high-precision timers (see the sketch after this list) and various OS-level sandboxing techniques.
- We detect code doing strange things and move it out of the multi-tenant environment into an isolated one, just in case.
- We detect code using APIs that increase the attack surface (like debuggers) and move it out of the multi-tenant environment into an isolated one, just in case.
- We will keep investing in security going forwards.
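To make the timer point concrete, here's a rough illustration of what that mitigation looks like from inside a Worker on Cloudflare's runtime, as I understand their public docs (OpenWorkers may behave differently):

    // On Cloudflare's runtime, Date.now() only advances at I/O boundaries,
    // so pure CPU work cannot be timed from inside the isolate.
    const t0 = Date.now();
    for (let i = 0; i < 10_000_000; i++) {} // CPU-only spin
    const t1 = Date.now();
    // t1 === t0 here: the clock is frozen during execution, which frustrates
    // Spectre-style attacks that depend on a high-resolution timer.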
Running secure multi-tenant environments is not an easy problem. It seems unlikely that it's possible for a typical open source project (typical in terms of limited staffing, usually including a complete lack of on-call staff) to release software to do so today.
Agreed. Cloudflare has dedicated security teams, 24h V8 patches, and years of hardening – I can't compete with that. The realistic use case for OpenWorkers is running your own code on your own infra, not multi-tenant SaaS. I will update the docs to reflect this.
That's the problem! It's really hard to find trustworthy sandboxing solutions, I've been looking for a long time. It's kind of my white whale.
I think this is "sandboxed so your debugging doesn't need to consider interactions", not "sandboxed so you can run untrusted code".
Since it’s self hosted the sandboxing aspect at the language/runtime level probably matters just a little bit less.
Not if you're self-hosting and running your own trusted code, you don't. I care about resource isolation, not security isolation, between my own services.
Completely agree. There are some apps that unfortunately need to care about some level of security isolation, but with OpenWorkers they could just put those specific workers on their own isolated instance.
Perhaps it might be helpful to some to also lay out the things that don't work today (or eg roadmap of what's being worked on that doesn't currently work?). Anyway, looks very cool!
Good idea! Main things not yet implemented: Durable Objects, WebSockets, HTMLRewriter, and cache API. Next priority is execution recording/replay for debugging. I'll add a roadmap section to the docs.
Could you add a kubernetes deployment quick-start? Just a simple deployment.yaml is enough.
Cool project, great work!
Forgive the uninformed questions, but given that `workerd` (https://github.com/cloudflare/workerd) is "open-source" (in terms of the runtime itself, less so the deployment model), is the main distinction here that OpenWorkers provides a complete environment? Any notable differences between the respective runtimes themselves? Is the intention to ever provide a managed offering for scalability/enterprise features, or primarily focus on enabling self-hosting for DIYers?
Thanks! Main differences:
1. Complete stack: workerd is just the runtime. OpenWorkers includes the full platform – dashboard, API, scheduler, logs, and self-hostable bindings (KV, S3/R2, Postgres).
2. Runtime: workerd uses Cloudflare's C++ codebase, OpenWorkers is Rust + rusty_v8. Simpler, easier to hack on.
3. Managed offering: Yes, there's already one at dash.openworkers.com – free tier available. But self-hosting is a first-class citizen.
This is similar to what Rivet (1) does, perhaps focusing more on stateless workloads than Rivet does.
(1) https://www.rivet.dev/docs/actors/
I wonder why V8 is considered superior to WASM for sandboxing.
On V8, you can run both JavaScript and WASM.
Theoretically yes, but neither CF Workers nor this project supports it. Indeed, none of the cloud providers offer first-party WASM support yet.
The problem is that there’s not much of a market opportunity yet. Customers aren’t voting for WASM with their wallets like they are mainstream language runtimes.
Cool project, but I never found the Cloudflare DX desirable compared to self-hosted alternatives. A plain old Node server in a Docker container was much easier to manage, use, and scale. Cloudflare's system was just a hoop you needed to jump through to get to the other nice-to-haves in their cloud.
Would it be useful for testing apps that you're going to deploy on Cloudflare anyway?
This is super nice! Thank you for working on this!
Recently I've really been enjoying Cloudflare Workflows (used it in https://mafia-arena.com), and it would be nice to build Workflows on top of this too.
This is very nice! Do you plan to hook this up to GitHub, so that a push of worker code (and maybe a yaml describing the environment & resources) will result in a redeploy?
Not yet, but it's one of the next big features. I'm currently working on the CLI (WIP), and GitHub integration with auto-deploy on push will come after that. A yaml config for bindings/cron is definitely on the roadmap too.
I'm also working on execution recording/replay – the idea is to capture a deterministic trace of a request, so you can push it as a GitHub issue and replay it locally (or let an AI debug it).
Does this actually use the Cloudflare Workers runtime, or is this just a way to run code in V8 isolates?
It's a custom V8 runtime built with rusty_v8, not the actual Cloudflare runtime (github.com/openworkers/openworkers-runtime-v8). The goal is API compatibility – same Worker syntax (fetch handler, Request/Response, etc.) so you can migrate code easily. Under the hood it's completely independent.
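To make the compatibility claim concrete, this is the kind of handler that should deploy unchanged to either runtime, since it only touches the standard fetch/Request/Response surface (a sketch, not taken from the project's docs):

    // Uses only the standard fetch handler and Request/Response APIs,
    // so the same file can target Cloudflare Workers or an OpenWorkers instance.
    export default {
      async fetch(request) {
        const { pathname } = new URL(request.url);
        return new Response(JSON.stringify({ path: pathname }), {
          headers: { "content-type": "application/json" },
        });
      },
    };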
Interesting option to consider next to OpenFaaS.