At my company, we needed to extend the existing y-websocket server. So, after much research, we developed a new implementation that is fully compatible with the current y-websocket client.
It is fully tested and supports several authentication scenarios as well as document persistence.
Github: GitHub - closeally/yjs-server
NPM: yjs-server - npm
Thanks @dmonad for all your hard work!
This is very interesting, I’ve been looking for something like this myself. There is a scalable version of the y-websocket server here: GitHub - kapv89/yjs-scalable-ws-backend but it doesn’t have a proper API and I find it rather confusing. It does use Redis for distribution over multiple servers, but for my use case I don’t think that is necessary currently.
How would one go about implementing a more fine-grained authorization mechanism? For example, there might be some properties of a document that require special permission to edit that not all users possess. In that case even if the user is authenticated, they might not be allowed to send certain updates to the server.
I did take a look at GitHub - kapv89/yjs-scalable-ws-backend. It works great! However, the memory requirements in Redis (it stores all updates in a queue) and the fact that every update (basically every keystroke) is written to the primary db (Postgres) made it a non-starter for our current use case.
We eventually want to enable horizontal scaling, and I am doing more research into how to better support this. One idea we had was to not use Redis at all for the pub/sub and instead connect the servers directly to each other. A simple signaling server might be needed for server discovery (like the one used for y-webrtc), which could be Redis or anything else (likely a user-implementable interface). If you think about it, each server already has a whole copy of each loaded document in memory, so why do we need to duplicate all of this in Redis?
Regardless, we want to implement a great API for this, so that folks using other scaling strategies like sharding don't need to bother with Redis, etc.
For your question on authorization: based on what I have researched, there is no way to allow only some updates to go through. In our application we use yjs-server to implement a "fixer" process that tries to correct any issues with the document; it runs at random intervals in the backend.
My recommendation would be to treat each document as an authorization boundary, meaning that you should split the document into many depending on who has access to it.
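As a sketch of that document-as-boundary approach: access is resolved per document name when a client connects, not per update. Everything below (the role names, the registry, `canWrite`) is hypothetical and would be backed by your real user/ACL store.

```typescript
// Hypothetical per-document permission model: the document name itself is the
// authorization boundary, and access is checked once when a client connects.
type Role = 'viewer' | 'editor' | 'admin'

interface DocPermissions {
  read: Role[]
  write: Role[]
}

// made-up registry; in practice this would come from your own database
const registry: Record<string, DocPermissions> = {
  'project-1/content': { read: ['viewer', 'editor', 'admin'], write: ['editor', 'admin'] },
  'project-1/settings': { read: ['viewer', 'editor', 'admin'], write: ['admin'] },
}

function canWrite(docName: string, role: Role): boolean {
  const perms = registry[docName]
  return perms !== undefined && perms.write.includes(role)
}

console.log(canWrite('project-1/content', 'editor'))  // true
console.log(canWrite('project-1/settings', 'editor')) // false
```

Splitting privileged fields into their own document (here `project-1/settings`) keeps the check this coarse and cheap, instead of trying to inspect individual updates.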
I’ve also considered omitting Redis as an intermediary signaling service. An older version of my project, which wasn’t using Yjs, used MongoDB’s change streams to propagate updates by reading the updates and distributing to clients directly. This way the persistence database also took on the role of distributing synchronization messages. However, this approach is probably not very efficient since it requires writing each individual update to the database and then having it sent back to the other server shards (or even the server that sent the update itself, depending on implementation).
The idea of connecting the server instances together directly, maybe by means of some web-based protocol(?) sounds good too. Then the database wouldn’t need to be involved at all. An important aspect to consider in this case is the network topology. You probably wouldn’t want to maintain a connection between each pair of servers in this situation but rather have messages forwarded. I can imagine the signaling for this kind of topology to be rather complicated, especially if it’s supposed to be highly reliable. For this reason, relying on an existing solution like the persistence database, be it MongoDB or PostgreSQL, or another special-case database like Redis, can simplify things somewhat. Looking at the code of the scalable yjs websocket backend, the pub/sub part of the code is quite simple. Though I still wouldn’t mind doing away with an extra service I have to deal with.
As for authorization, maybe putting a filter in the method that commits the changes to the database would work? A filter that only lets through authorized changes. Though defining and performing this kind of filtering does appear to be somewhat complicated, especially if the document is difficult to inspect and the authorization rules are complex.
The use case for this is that some settings of a document may only be changed with special permissions and regular users shouldn’t be able to change this setting. For example, documents could be set to become read-only in which case any update by a basic user has to be discarded. Furthermore, the readonly setting itself may only be changed by users with the necessary permissions, even when the document is in read/write mode. I could implement this by creating a separate privileged document, but that still wouldn’t solve the issue of making it temporarily read-only.
Another aspect I’ve been contemplating is how Yjs can be used to serve a stream of updates without making changes. Some documents that are restricted to certain users may still be viewed by the public and they should get live updates on this without being allowed to make changes. I can make the interface disallow changes, but the server should also verify this on its own.
An interesting part of Yjs that I only discovered recently is that subdocuments, that is instances of Y.Doc nested within other Y.Doc objects, can be created and lazily loaded. Does your server implementation handle this case? (if there is even any special handling to do) In another post on this forum: Extend y-websocket provider to support sub docs synchronization in one websocket connection - #4 by douira support for subdocuments in y-websocket was implemented. If I want to allow the same website to show multiple documents without needing to open a new websocket connection for each one, this is important. I’m not sure how this would be persisted in the database though and how it would work with your server layer. They do modify the server provided by y-websocket, so some changes would probably be necessary to make subdocuments work.
Importantly, each document needs its own separate awareness instance, even if the user isn’t editing multiple documents at the same time. Things like collaborative cursors use the awareness instance to synchronize, and sharing it across all (potentially many) documents would be highly inefficient. Showing the participants of a particular document (or of several documents) also wouldn’t work if the awareness instance were globally shared.
Was randomly browsing the forums when I saw my repo mentioned here… I recently pushed another update in a branch, external_api_persistence. I have extracted the persistence logic into a few API calls, and also provided logic for read-only/read+write access. Might be helpful.
The API can handle the logic of persistence any way you see fit. This branch was made because a German scientist wanted to persist his documents to S3 and use Redis as the primary data source for documents that are being accessed.