Bring y-websocket to Cloudflare Workers

I want to deploy y-websocket to Cloudflare Workers.

Normally I would start it locally via

```sh
HOST=localhost PORT=1234 npx y-websocket
```

but how would I do it here? There is documentation for it, but I honestly have no idea how to implement it so that y-websocket runs there. Hence, it would be great if you could help me out here.

y-websocket is just a small Node script; it should be easy to deploy.

However, y-websocket (unless you extend it) expects to be the only server that serves Yjs documents to clients.

Cloudflare Workers are ephemeral. Also, Cloudflare will probably spawn multiple instances in different regions to serve clients. You need some way to persist data and have the servers share state (i.e. the current state of the Yjs document). Maybe you can use Cloudflare’s persistent state for that?

It is probably easier to start with spawning a y-websocket server on a single VM instance.
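
For reference, a standalone server on a VM is only a few lines around the server utilities that ship with y-websocket - a sketch, and note that the utils import path has moved between versions, so check the version you install:

```ts
import * as http from 'http'
import { WebSocketServer } from 'ws'
// server-side helper used by y-websocket's own bin script
import { setupWSConnection } from 'y-websocket/bin/utils'

const host = process.env.HOST || 'localhost'
const port = Number(process.env.PORT || 1234)

// Plain HTTP server so load balancers have something to health-check
const server = http.createServer((_req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' })
  res.end('okay')
})

// Every websocket connection is handed to y-websocket, which manages
// rooms, the sync protocol, and awareness for you
const wss = new WebSocketServer({ server })
wss.on('connection', (conn, req) => setupWSConnection(conn, req))

server.listen(port, host, () => console.log(`listening on ${host}:${port}`))
```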


Many thanks for getting back to me, @dmonad.

Assume we have a very large number of Quill documents that are being edited at the same time. Is there a preferred setup for such a scenario, so that users can work on the Quill documents collaboratively?

One interesting approach here is to use Cloudflare Durable Objects as the durable backing store, with an at-most-one-instance guarantee: Using Durable Objects · Cloudflare Workers docs
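
For the curious, here is a minimal sketch of what that wiring could look like - my own guess, not code from that repo. It routes every connection for a doc to the Durable Object instance named after the doc ID; the actual Yjs sync-protocol handling and storage writes are elided:

```ts
interface Env {
  YDOC: DurableObjectNamespace
}

// Worker entry: route every connection for a given doc to the single
// Durable Object instance named after that doc id.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const docId = new URL(request.url).pathname.slice(1)
    const id = env.YDOC.idFromName(docId) // same name -> same instance
    return env.YDOC.get(id).fetch(request)
  },
}

// At most one live instance exists per doc id, so it can hold the
// authoritative document in memory and persist updates to its storage.
export class YDocObject {
  private sockets = new Set<WebSocket>()
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const { 0: client, 1: server } = new WebSocketPair()
    server.accept()
    this.sockets.add(server)
    server.addEventListener('message', (event) => {
      // A real server would run the Yjs sync protocol here and persist
      // updates via this.state.storage.put(...); this sketch just relays.
      for (const ws of this.sockets) if (ws !== server) ws.send(event.data)
    })
    server.addEventListener('close', () => this.sockets.delete(server))
    return new Response(null, { status: 101, webSocket: client })
  }
}
```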

Someone released some (work-in-progress?) code to prototype this: GitHub - threepointone/y-workers: A Yjs backend on Cloudflare Workers

No affiliation - I was just searching this forum for Cloudflare Workers to see if there was discussion :smile: and wanted to share that link I’d come across.

@jasonm Many thanks for getting back to me. I also stumbled across this very prototype, but I do think it is not quite there yet.

I actually thought of using this GitHub repo and adapting it so that one might use it for Workers with Durable Objects.

What do you think about such an approach?

Hey @junioriosity - interesting, I hadn’t seen that repo yet. I haven’t read the code in depth, so I don’t know whether it would provide horizontal scalability in the sense of adding capacity (e.g. increasing maximum concurrency). I also don’t yet understand the purpose of the redis queues in that library: given how the queue key is constructed in getDocUpdatesKey (src/redis.ts), it looks like the key depends only on the doc ID and does not vary per server, so I don’t see how it would provide any sort of backpressure… in short, I don’t really get what the library is trying to do with the queues.

Regarding the library’s usage of redis pubsub, it appears to provide similar functionality to Extensions – Tiptap. Since each message is replicated to every attached Yjs websocket server and there is no sharding/routing, all the servers should see the same or similar CPU and RAM usage, and the library does not appear to provide any sort of performance scalability (i.e. “add more servers to handle more concurrent docs/users”). It does provide fault-tolerance/high-availability (HA) benefits: if one WS server goes down, the other(s) can continue to accept updates. Redis itself then becomes the single point of failure unless you use redis-cluster or something similar… but still, this is all about HA rather than perf/max concurrency.

Generally I agree with @dmonad’s advice:

> It is probably easier to start with spawning a y-websocket server on a single VM instance.

until you’ve run some load testing to understand where there is a bottleneck for your particular app’s usage patterns. You can vertically scale up a single server quite a lot.

If I were designing a system to scale out to accept a large number of concurrent users, I’d probably do some kind of sharding/routing system. I don’t know of a Yjs websocket library offhand that does this, although there may be one. There are also the serverless approaches, e.g. Y-serverless: AWS Lambda + DynamoDB to use as a provider - I haven’t read those very carefully to understand what tradeoffs they bring.
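
To make “sharding/routing” concrete: the simplest version is deterministic routing by doc ID, e.g. hashing the ID to pick one of N y-websocket backends so that each doc lives on exactly one server. A toy sketch (hostnames made up):

```ts
import { createHash } from 'crypto'

// Hypothetical pool of y-websocket backends
const backends = ['ws://ws-0:1234', 'ws://ws-1:1234', 'ws://ws-2:1234']

// Hash the doc id to pick a backend deterministically, so every client
// editing the same doc lands on the same server
function backendFor(docId: string): string {
  const h = createHash('sha1').update(docId).digest()
  return backends[h.readUInt32BE(0) % backends.length]
}
```

A proxy or load balancer would use backendFor(docId) to route each websocket upgrade; the hard part is resharding docs when the backend pool changes.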

Hope this helps - good luck with your application!

Also I see that the author of that library is here on the forums - perhaps @kapv89 can provide a more useful answer than I could with my hasty reading of his library :smiling_face:

hey…

  1. The purpose of the redis queues is to provide temporary storage for the most recent updates of a document, so that when a new user joins a document which is being actively edited by other users, the new user gets the freshest state of the document (as persisting updates in the db can take some time).
  2. Regarding the use of redis pubsub, the sharding happens by documentId… I can go deeper into this if needed, but you are welcome to go through the code. Essentially, all servers that are powering the same collaborative session need to have their own copy of the document under collaborative editing, and these copies need to be kept in sync via pubsub. The redis pubsub can easily be replaced with something like GCP Cloud Pub/Sub if you have enough scale that Cloud Pub/Sub will give you good latencies.
  3. Regarding “this is all about HA rather than perf/max concurrency”… not really. A very simplified version of how horizontal scaling happens: the browser sends a websocket request, and ECS (or whatever you are using) routes that request to a node that has less load. Suppose you have 10k-ish collaborative editing sessions going on across 100-ish documents, and consider document #56 out of those 100. When a new user joins the collaborative editing session of doc #56, either they connect to a node which already has doc #56 loaded in memory, or they connect to a node which doesn’t. In the latter case, that node loads doc #56 into memory, keeps it in sync with the other nodes serving doc #56 via redis pubsub, and keeps the browser in sync with everything using websockets - see the sketch below.
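
Here’s a minimal sketch of that pattern - not the library’s actual code; the key/channel names and the db snapshot step are made up for illustration, assuming ioredis and yjs. It uses a Yjs transaction origin to avoid re-publishing updates that arrived via pubsub:

```ts
import Redis from 'ioredis'
import * as Y from 'yjs'

const pub = new Redis()
const sub = new Redis() // a redis connection in subscribe mode is dedicated

// Hypothetical names: one recent-updates queue and one channel per doc id
const updatesKey = (docId: string) => `doc:${docId}:updates`
const channel = (docId: string) => `doc:${docId}`

export async function loadDoc(docId: string): Promise<Y.Doc> {
  const doc = new Y.Doc()

  // 1. Seed from the snapshot persisted in the db, e.g.
  //    Y.applyUpdate(doc, await db.loadSnapshot(docId)) - elided here.

  // 2. Replay the recent updates buffered in redis, since persisting
  //    to the db can lag behind the live session.
  const recent = await pub.lrangeBuffer(updatesKey(docId), 0, -1)
  for (const update of recent) Y.applyUpdate(doc, update, 'redis')

  // 3. Subscribe so edits made on other nodes reach this node's copy.
  await sub.subscribe(channel(docId))
  sub.on('messageBuffer', (chan: Buffer, update: Buffer) => {
    if (chan.toString() === channel(docId)) Y.applyUpdate(doc, update, 'redis')
  })

  // 4. Fan out local edits (from this node's websocket clients); skip
  //    updates whose origin is 'redis' to avoid an echo loop. A real
  //    implementation would also trim/expire the queue.
  doc.on('update', (update: Uint8Array, origin: unknown) => {
    if (origin === 'redis') return
    const buf = Buffer.from(update)
    pub.rpush(updatesKey(docId), buf)
    pub.publish(channel(docId), buf)
  })

  return doc
}
```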