Atomic reads and writes by doc?


We have been looking at different persistence libraries for YJS.

We have noticed that all of these libraries guard their atomic transactions with a mutex (or a similar queue) scoped at the server level.

Is there any reason these are not document scoped (i.e. a mutex per docName)? On a busy server we are noticing that all clients end up in the same mutex queue…



this.updateHandler = update => {
      // mux: only store update in redis if this document update does not originate from redis
      this.mux(() => {
        rp.redis.rpushBuffer(name + ':updates', Buffer.from(update)).then(len => {
          if (len === this._clock + 1) {
            this._clock++
            if (this._fetchingClock < this._clock) {
              this._fetchingClock = this._clock
            }
          }
          // @ts-ignore
          rp.redis.publish(this.name, len.toString())
        })
      })
    }



this._transact = f => {
      const currTr = this.tr
      this.tr = (async () => {
        await currTr
        let res = /** @type {any} */ (null)
        try {
          res = await f(db)
        } catch (err) {
          console.warn('Error during saving transaction', err)
        }
        return res
      })()
      return this.tr
    }



storeUpdate: (docName, update, outsideTransactionQueue) => {
      const callback = (db, config) => storeUpdate(db, config.tableName, docName, update)

      return outsideTransactionQueue ? callback(db, config) : transact(callback)
    }

We are considering changing the mutex to be document scoped (See below).

Is there any reason to avoid this…?

constructor(location, collection) {
    this.db = new Adapter(location, collection);
    this.trs = {}; // docName -> tail of that document's transaction queue

    this._transact = (docName, f) => {
      if (!this.trs[docName]) {
        this.trs[docName] = promise.resolve();
      }

      const currTr = this.trs[docName];
      this.trs[docName] = (async () => {
        await currTr;

        let res = /** @type {any} */ (null);
        try {
          res = await f(this.db);
        } catch (err) {
          console.warn('Error during saving transaction', err);
        }
        return res;
      })();

      return this.trs[docName];
    };
  }

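To illustrate what we mean, here is a standalone sketch of the same idea (class and property names are purely illustrative, not from any Yjs library): operations for the same docName run strictly in order, while different docNames do not wait on each other.

```javascript
// Sketch of per-document transaction queues. Each docName gets its own
// promise chain, so writes to one document never queue behind another's.
// All names here (PerDocQueue, trs, transact) are illustrative.
class PerDocQueue {
  constructor () {
    this.trs = {} // docName -> tail promise of that document's queue
  }

  // Append f to docName's queue; resolves with f's result.
  transact (docName, f) {
    const prev = this.trs[docName] || Promise.resolve()
    const next = (async () => {
      await prev
      try {
        return await f()
      } catch (err) {
        console.warn('Error during saving transaction', err)
        return null
      }
    })()
    this.trs[docName] = next
    return next
  }
}

// Usage: the two 'a' transactions serialize, but 'b' does not wait for
// the slow first 'a' transaction.
const q = new PerDocQueue()
const order = []
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))
Promise.all([
  q.transact('a', async () => { await sleep(30); order.push('a1') }),
  q.transact('b', async () => { order.push('b1') }),
  q.transact('a', async () => { order.push('a2') })
]).then(() => console.log(order.join(','))) // prints "b1,a1,a2"
```

With a single server-wide chain, `b1` would have waited out the 30 ms sleep; with per-document chains it completes immediately.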
Hi @jstleger0,

The “mutex” in y-redis is just used to distinguish local and remote changes. Basically, I don’t want to publish a message to redis when I receive the very same message already from redis. This would result in an infinite loop, or at least degraded performance.

I can’t speak for the other providers. It seems they implement a queue to avoid concurrent writes to the database. Unless I’m misunderstanding something, I think they could all run concurrently.


“mutex” is more of a fun analogy. Since JavaScript is single-threaded, there can’t be any real concurrency here. These guards just prevent the same thread from entering the protected code twice.
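For reference, the guard is roughly this pattern (a from-memory sketch in the style of lib0’s `createMutex`, not a verbatim copy):

```javascript
// Re-entrancy guard, not a lock: if we are already inside the guarded
// section, the nested call is skipped (or redirected to a fallback g).
const createMutex = () => {
  let token = true
  return (f, g) => {
    if (token) {
      token = false
      try {
        f()
      } finally {
        token = true
      }
    } else if (g !== undefined) {
      g()
    }
  }
}

// Usage: the nested call is skipped. This is how an update that arrived
// from redis avoids being re-published back to redis.
const mux = createMutex()
const calls = []
mux(() => {
  calls.push('outer')
  mux(() => calls.push('inner')) // skipped: already inside the guard
})
console.log(calls) // [ 'outer' ]
```

Note that the flag is synchronous: it only protects against re-entry within the same call stack, which is exactly the local-vs-remote echo case described above.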


Hey @dmonad

That’s great, thanks! We will go ahead and scope those transactions to a docName.

We currently have a stateful server with session affinity and we are not using y-redis so we should be okay to move forward.
