SSE is way simpler and doesn’t need a separate port. It’s basically the server speaking in bursts on an HTTP response that stays open for a very long time.
The small disadvantage is that it’s one-way, and if you want to send something to the server you have to use HTTP. However, with http/2 or with keepalives that’s not a big problem.
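On the server side this only needs two routes. A minimal sketch, assuming Express; the `/events` and `/update` routes and the broadcast logic are made-up placeholders:

```ts
// Minimal sketch of the two server routes: an SSE down-channel and a plain
// HTTP up-channel. Assumes Express; the route names are made up.
import express from "express";

const app = express();
const clients = new Set<express.Response>();

// Down-channel: an HTTP response that stays open; events are written to it
// as they happen, each framed as "data: ...\n\n".
app.get("/events", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.flushHeaders();
  clients.add(res);
  req.on("close", () => clients.delete(res));
});

// Up-channel: clients push their updates with an ordinary POST.
app.post("/update", express.text({ type: "*/*" }), (req, res) => {
  for (const client of clients) {
    // Note: the payload must be a single line (or split across multiple
    // "data:" lines) to fit the SSE framing.
    client.write(`data: ${req.body}\n\n`);
  }
  res.status(204).end();
});

app.listen(3000);
```

The only SSE-specific parts are the `text/event-stream` content type and the `data: ...\n\n` framing; everything else is an ordinary request handler.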
Not knowing the internals, would it be hard to implement this myself?
y-websocket is not that big (the client is around 500 lines of code, the server around 400), so it wouldn’t be infeasible to port it to a different communication channel. It uses a binary encoding protocol that should work over any transport as well.
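The updates themselves are plain Uint8Arrays, so the transport-facing part is mostly forwarding bytes. A rough sketch, where `send` and `onMessage` are placeholders for whatever channel you pick:

```ts
// Rough sketch: Yjs updates are plain Uint8Arrays, so any channel that can
// carry bytes (or an encoding of them) can stand in for the websocket.
// `send` and `onMessage` are placeholders for the chosen transport.
import * as Y from "yjs";

export function bindDocToTransport(
  doc: Y.Doc,
  send: (update: Uint8Array) => void,
  onMessage: (handler: (update: Uint8Array) => void) => void
): void {
  // Outgoing: every local change emits an update that can be shipped as-is.
  doc.on("update", (update: Uint8Array, origin: unknown) => {
    if (origin !== "remote") send(update);
  });

  // Incoming: apply remote updates, tagging the transaction origin so the
  // handler above doesn't echo them back out.
  onMessage((update) => Y.applyUpdate(doc, update, "remote"));
}
```

The real y-websocket protocol also runs an initial sync handshake and awareness messages on top of this, but they travel over the same channel in the same encoding.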
However, I would be concerned about the send times to the server. Updates on one client would be delayed getting to the server and consequently to other clients, so it wouldn’t really be real-time.
I believe the send times will be fine, really. When using http/2 (which you should) or keepalive, you already have the socket open, so the request shouldn’t be much slower than sending the same data over a websocket.
Receiving is just plain HTTP, so it should be on par with a websocket.
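On the client side that amounts to an EventSource for receiving and a plain fetch for sending; a rough sketch (the endpoint names are made up):

```ts
// Rough client-side sketch: receive over SSE, send over plain HTTP.
// Over http/2 (or HTTP/1.1 keep-alive) the POST reuses the already-open
// connection. The /events and /update endpoints are made-up names.
declare function applyRemoteUpdate(data: string): void; // placeholder

const events = new EventSource("/events");
events.onmessage = (event) => {
  applyRemoteUpdate(event.data);
};

async function sendUpdate(payload: string): Promise<void> {
  await fetch("/update", { method: "POST", body: payload });
}
```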
For me the big thing is not having the headache of adding a websocket server to my deploys. It also makes cloud deploys simpler.
Indeed, the source looks quite straightforward. SSE does use text strings, though, so the binary encoding probably isn’t an option as-is.
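It could probably still be made to work by base64-wrapping the binary updates, at the cost of roughly a third more bytes on the wire; a rough sketch:

```ts
// Rough sketch: SSE event data is text-only, so binary updates would need to
// be base64-wrapped on the way out and unwrapped on the way in. Base64 output
// contains no newlines, so it fits the "data: ...\n\n" framing directly.

// Server side (Node): encode the Uint8Array update before writing the event.
function toSseFrame(update: Uint8Array): string {
  return `data: ${Buffer.from(update).toString("base64")}\n\n`;
}

// Client side (browser): decode the event data back into bytes.
function fromSseData(data: string): Uint8Array {
  return Uint8Array.from(atob(data), (c) => c.charCodeAt(0));
}
```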
I might take a stab at implementing it at some point.
SSE is not much lighter-weight than websockets: both keep a connection open, so if a load balancer can handle 10k connections, each counts as one and eats into that limit either way. Also, websockets can run over http/2 as well.