Server-Sent Events: the alternative to WebSockets you should be using

When developing real-time web applications, WebSockets might be the first thing that comes to mind. However, Server-Sent Events (SSE) are a simpler alternative that is often superior.

Contents

  1. Prologue
  2. WebSockets?
  3. What is wrong with WebSockets
    1. Compression
    2. Multiplexing
    3. Issues with proxies
    4. Cross-Site WebSocket Hijacking
  4. Server-Sent Events
  5. Let’s write some code
    1. The Reverse-Proxy
    2. The Frontend
    3. The Backend
  6. Bonus: Cool SSE features
  7. Conclusion

Prologue

Recently I have been curious about the best way to implement a real-time web application: that is, an application containing one or more components which automatically update, in real time, in reaction to some external event. The most common example of such an application is a messaging service, where we want every message to be immediately broadcast to everyone who is connected, without requiring any user interaction.

After some research I stumbled upon an amazing talk by Martin Chaov, which compares Server-Sent Events, WebSockets and Long Polling. The talk, which is also available as a blog post, is entertaining and very informative. I highly recommend it. However, it is from 2018 and some small things have changed, so I decided to write this article.

WebSockets?

WebSockets enable the creation of two-way low-latency communication channels between the browser and a server.

This makes them ideal in certain scenarios, like multiplayer games, where the communication is two-way, in the sense that both the browser and server send messages on the channel all the time, and it is required that these messages be delivered with low latency.

In a First-Person Shooter, the browser could be continuously streaming the player’s position, while simultaneously receiving updates on the location of all the other players from the server. Moreover, we definitely want these messages to be delivered with as little overhead as possible, to avoid the game feeling sluggish.

This is the opposite of the traditional request-response model of HTTP, where the browser is always the one initiating the communication, and each message carries significant overhead, from establishing TCP connections and from HTTP headers.

However, many applications do not have requirements this strict. Even among real-time applications, the data flow is usually asymmetric: the server sends the majority of the messages, while the client mostly just listens and only once in a while sends some updates. For example, in a chat application a user may be connected to many rooms, each with tens or hundreds of participants. Thus, the volume of messages received far exceeds that of messages sent.

What is wrong with WebSockets

Two-way channels and low latency are extremely good features. Why bother looking further?

WebSockets have one major drawback: they do not work on top of HTTP, at least not fully. They require their own TCP connection. They use HTTP only to establish the connection, but then upgrade it to a standalone TCP connection on top of which the WebSocket protocol can be used.

This may not seem like a big deal; however, it means that WebSockets cannot benefit from any HTTP feature. That is:

  • No support for compression
  • No support for HTTP/2 multiplexing
  • Potential issues with proxies
  • No protection from Cross-Site Hijacking

At least, this was the situation when the WebSocket protocol was first released. Nowadays, there are some complementary standards that try to improve on these shortcomings. Let’s take a closer look at the current situation.

Note: If you do not care about the details, feel free to skip the rest of this section and jump directly to Server-Sent Events or the demo.

Compression

On standard connections, HTTP compression is supported by every browser and is super easy to enable server-side: just flip a switch in your reverse-proxy of choice. With WebSockets the question is more complex, because there are no requests and responses to compress; instead, the individual WebSocket frames must be compressed.

RFC 7692, released in December 2015, tries to improve the situation by defining “Compression Extensions for WebSocket”. However, to the best of my knowledge, no popular reverse-proxy (e.g. nginx, caddy) implements this, making it impossible to have compression enabled transparently.

This means that if you want compression, it has to be implemented directly in your backend. Luckily, I was able to find some libraries supporting RFC 7692: for example, the websockets and wsproto Python libraries, and the ws library for Node.js.

However, the latter suggests not to use the feature:

The extension is disabled by default on the server and enabled by default on the client. It adds a significant overhead in terms of performance and memory consumption so we suggest to enable it only if it is really needed.

Note that Node.js has a variety of issues with high-performance compression, where increased concurrency, especially on Linux, can lead to catastrophic memory fragmentation and slow performance.

On the browser side, Firefox supports WebSocket compression since version 37. Chrome supports it as well. However, apparently Safari and Edge do not.

I did not take the time to verify the situation on mobile browsers.
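
For completeness, here is a minimal sketch of what toggling permessage-deflate looks like with the Python websockets library mentioned above. The echo handler and port are made up for illustration:

import asyncio
import websockets

async def echo(ws):
    # Recent versions of the library pass only the connection object.
    async for message in ws:
        await ws.send(message)

async def main():
    # compression="deflate" enables permessage-deflate (RFC 7692); it is
    # the library's default, and compression=None disables it.
    async with websockets.serve(echo, "localhost", 8765,
                                compression="deflate"):
        await asyncio.Future()  # run forever

asyncio.run(main())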

Multiplexing

HTTP/2 introduced support for multiplexing, meaning that multiple request/response pairs to the same host no longer require separate TCP connections. Instead, they all share the same TCP connection, each operating on its own independent HTTP/2 stream.

This is, again, supported by every browser and is very easy to transparently enable on most reverse-proxies.

On the contrary, the WebSocket protocol has no support, by default, for multiplexing. Multiple WebSockets to the same host will each open their own separate TCP connection. If you want two separate WebSocket endpoints to share their underlying connection, you must implement multiplexing yourself in your application’s code.

RFC 8441, released in September 2018, tries to fix this limitation by adding support for “Bootstrapping WebSockets with HTTP/2”. It has been implemented in Firefox and Chrome. However, as far as I know, no major reverse-proxy implements it. Unfortunately, I could not find any implementation in Python or JavaScript either.

Issues with proxies

HTTP proxies without explicit support for WebSockets can prevent unencrypted WebSocket connections from working. The proxy will not be able to parse the WebSocket frames and will close the connection.

However, WebSocket connections happening over HTTPS should be unaffected by this problem, since the frames will be encrypted and the proxy should just forward everything without closing the connection.

To learn more, see “How HTML5 Web Sockets Interact With Proxy Servers” by Peter Lubbers.

Cross-Site WebSocket Hijacking

WebSocket connections are not protected by the same-origin policy. This makes them vulnerable to Cross-Site WebSocket Hijacking.

Therefore, WebSocket backends must check the correctness of the Origin header if they use any kind of client-cached authentication, such as cookies or HTTP authentication.

I will not go into the details here, but consider this short example. Assume a Bitcoin Exchange uses WebSockets to provide its trading service. When you log in, the Exchange might set a cookie to keep your session active for a given period of time. Now, all an attacker has to do to steal your precious Bitcoins is make you visit a site under her control, and simply open a WebSocket connection to the Exchange. The malicious connection is going to be automatically authenticated. That is, unless the Exchange checks the Origin header and blocks the connections coming from unauthorized domains.
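
A minimal sketch of such a check, using Starlette (the Python framework we will use later for the demo); the allowed origin is of course a made-up example:

from starlette.applications import Starlette
from starlette.websockets import WebSocket

ALLOWED_ORIGINS = {"https://exchange.example.com"}  # hypothetical

app = Starlette()

@app.websocket_route("/trade")
async def trade_endpoint(ws: WebSocket):
    # Reject the handshake when the Origin header is missing or unknown.
    if ws.headers.get("origin") not in ALLOWED_ORIGINS:
        await ws.close(code=1008)  # 1008 = policy violation
        return
    await ws.accept()
    # ... only authenticated, same-origin traffic from here on ...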

I encourage you to check out the great article about Cross-Site WebSocket Hijacking by Christian Schneider, to learn more.

Server-Sent Events

Now that we know a bit more about WebSockets, including their advantages and shortcomings, let us learn about Server-Sent Events and find out if they are a valid alternative.

Server-Sent Events enable the server to send low-latency push events to the client, at any time. They use a very simple protocol that is part of the HTML Standard and supported by every browser.

Unlike WebSockets, Server-Sent Events flow only one way: from the server to the client. This makes them unsuitable for a very specific set of applications, namely those that require a communication channel that is both two-way and low-latency, like real-time games. However, this trade-off is also their major advantage over WebSockets: being one-way, Server-Sent Events work seamlessly on top of HTTP, without requiring a custom protocol. This gives them automatic access to all of HTTP’s features, such as compression and HTTP/2 multiplexing, making them a very convenient choice for the majority of real-time applications, where the bulk of the data is sent from the server and the small overhead of HTTP headers on requests is acceptable.

The protocol is very simple. It uses the text/event-stream Content-Type and messages of the form:

data: First message

event: join
data: Second message. It has two
data: lines, a custom event type and an id.
id: 5

: comment. Can be used as keep-alive

data: Third message. I do not have more data.
data: Please retry later.
retry: 10

Events are separated by a blank line, i.e. two consecutive newlines (\n\n), and consist of various optional fields.

The data field, which can be repeated to denote multiple lines in the message, is unsurprisingly used for the content of the event.

The event field allows us to specify custom event types which, as we will show in the next section, can be used to fire different event handlers on the client.

The other two fields, id and retry, are used to configure the behaviour of the automatic reconnection mechanism. This is one of the most interesting features of Server-Sent Events. It ensures that when the connection is dropped or closed by the server, the client will automatically try to reconnect, without any user intervention.

The retry field specifies the minimum amount of time, in milliseconds, to wait before trying to reconnect. It can also be sent by the server immediately before closing a client’s connection, to reduce its load when too many clients are connected.

The id field associates an identifier with the current event. When reconnecting, the client will transmit the last seen id to the server using the Last-Event-ID HTTP header. This allows the stream to be resumed from the correct point.

Finally, the server can stop the automatic reconnection mechanism altogether by returning an HTTP 204 No Content response.
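
To make the framing concrete, here is a small helper of mine (a sketch, not part of the demo code) that serializes a single event in the format described above:

from typing import Optional

def format_sse(data: str, event: Optional[str] = None,
               id: Optional[str] = None,
               retry: Optional[int] = None) -> str:
    lines = []
    if event is not None:
        lines.append("event: %s" % event)
    for chunk in data.splitlines():
        lines.append("data: %s" % chunk)   # one data: field per line
    if id is not None:
        lines.append("id: %s" % id)
    if retry is not None:
        lines.append("retry: %d" % retry)  # milliseconds
    return "\n".join(lines) + "\n\n"       # a blank line ends the event

For example, format_sse("hello", event="join", id="5") returns "event: join\ndata: hello\nid: 5\n\n".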

Let’s write some code!

Let us now put into practice what we learned. In this section we will implement a simple service both with Server-Sent Events and WebSockets. This should enable us to compare the two technologies. We will find out how easy it is to get started with each one, and verify by hand the features discussed in the previous sections.

We are going to use Python for the backend, Caddy as a reverse-proxy and of course a couple of lines of JavaScript for the frontend.

To make our example as simple as possible, our backend is just going to consist of two endpoints, each streaming a unique sequence of random numbers. They are going to be reachable from /sse1 and /sse2 for Server-Sent Events, and from /ws1 and /ws2 for WebSockets. Our frontend, meanwhile, is going to consist of a single index.html file, with some JavaScript to let us start and stop WebSocket and Server-Sent Events connections.

The code of this example is available on GitHub.

The Reverse-Proxy

Using a reverse-proxy, such as Caddy or nginx, is very useful, even in a small example such as this one. It gives us very easy access to many features that our backend of choice may lack.

More specifically, it allows us to easily serve static files and automatically compress HTTP responses; to provide support for HTTP/2, letting us benefit from multiplexing, even if our backend only supports HTTP/1; and finally to do load balancing.

I chose Caddy because it automatically manages HTTPS certificates for us, letting us skip a very boring task, especially for a quick experiment.

The basic configuration, which resides in a Caddyfile at the root of our project, looks something like this:

localhost

bind 127.0.0.1 ::1

root ./static
file_server browse

encode zstd gzip

This instructs Caddy to listen on the local interface on ports 80 and 443, enabling support for HTTPS and generating a self-signed certificate. It also enables compression and serving static files from the static directory.

As the last step we need to ask Caddy to proxy our backend services. Server-Sent Events is just regular HTTP, so nothing special here:

reverse_proxy /sse1 127.0.1.1:6001
reverse_proxy /sse2 127.0.1.1:6002

To proxy WebSockets, our reverse-proxy needs explicit support for them. Luckily, Caddy can handle this without problems, even though the configuration is slightly more verbose:

@websockets {
    header Connection *Upgrade*
    header Upgrade    websocket
}

handle /ws1 {
    reverse_proxy @websockets 127.0.1.1:6001
}

handle /ws2 {
    reverse_proxy @websockets 127.0.1.1:6002
}

Finally, you can start Caddy with:

$ sudo caddy start

The Frontend

Let us start with the frontend, by comparing the JavaScript APIs of WebSockets and Server-Sent Events.

The WebSocket JavaScript API is very simple to use. First, we need to create a new WebSocket object, passing the URL of the server. Here wss indicates that the connection happens over HTTPS. As mentioned above, using HTTPS is strongly recommended to avoid issues with proxies.

Then, we should listen to some of the possible events (i.e. open, message, close, error), by either setting the corresponding onevent property (e.g. onmessage) or by using addEventListener().

const ws = new WebSocket("wss://localhost/ws");

ws.onopen = e => console.log("WebSocket open");

ws.addEventListener(
  "message", e => console.log(e.data));

The JavaScript API for Server-Sent Events is very similar. It requires us to create a new EventSource object passing the URL of the server, and then allows us to subscribe to the events in the same way as before.

The main difference is that we can also subscribe to custom events.

const es = new EventSource("https://localhost/sse");

es.onopen = e => console.log("EventSource open");

es.addEventListener(
  "message", e => console.log(e.data));

// Event listener for custom event
es.addEventListener(
  "join", e => console.log(`${e.data} joined`))

We can now use all this freshly acquired knowledge about JS APIs to build our actual frontend.

To keep things as simple as possible, it is going to consist of only one index.html file, with a bunch of buttons that will let us start and stop our WebSockets and EventSources, like so:

<button onclick="startWS(1)">Start WS1</button>
<button onclick="closeWS(1)">Close WS1</button>
<br>
<button onclick="startWS(2)">Start WS2</button>
<button onclick="closeWS(2)">Close WS2</button>

We want more than one WebSocket/EventSource so we can test if HTTP/2 multiplexing works and how many connections are open.

Now let us implement the two functions needed by those buttons to work:

const wss = [];

function startWS(i) {
  if (wss[i] !== undefined) return;

  const ws = wss[i] = new WebSocket("wss://localhost/ws"+i);
  ws.onopen = e => console.log("WS open");
  ws.onmessage = e => console.log(e.data);
  ws.onclose = e => closeWS(i);
}

function closeWS(i) {
  if (wss[i] !== undefined) {
    console.log("Closing websocket");
    wss[i].close();
    delete wss[i];
  }
}

The frontend code for Server-Sent Events is almost identical. The only difference is the onerror event handler, which just logs a message: there is no need to handle reconnection ourselves, because the browser will attempt it automatically.

const ess = [];

function startES(i) {
  if (ess[i] !== undefined) return;

  const es = ess[i] = new EventSource("https://localhost/sse"+i);
  es.onopen = e => console.log("ES open");
  es.onerror = e => console.log("ES error", e);
  es.onmessage = e => console.log(e.data);
}

function closeES(i) {
  if (ess[i] !== undefined) {
    console.log("Closing EventSource");
    ess[i].close();
    delete ess[i];
  }
}

The Backend

To write our backend, we are going to use Starlette, a simple async web framework for Python, and Uvicorn as the server. Moreover, to keep things modular, we are going to separate the data-generating process from the implementation of the endpoints.

We want each of the two endpoints to generate a unique random sequence of numbers. To accomplish this we will use the stream id (i.e. 1 or 2) as part of the random seed.

Ideally, we would also like our streams to be resumable. That is, a client should be able to resume the stream from the last message it received, in case the connection is dropped, instead of re-reading the whole sequence. To make this possible we will assign an ID to each message/event, and use it, together with the stream id, to initialize the random seed before each message is generated. In our case, the ID is just going to be a counter starting from 0.

With all that said, we are ready to write the get_data function, which is responsible for generating our random numbers:

import random

def get_data(stream_id: int, event_id: int) -> int:
    rnd = random.Random()
    rnd.seed(stream_id * event_id)
    return rnd.randrange(1000)
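
As a quick sanity check (a hypothetical usage sketch, not part of the demo), note that re-seeding before every event makes the output a pure function of the two ids, which is exactly what makes resuming possible:

# The same (stream_id, event_id) pair always yields the same number, so a
# client replaying its last id after a reconnect sees a consistent stream.
assert get_data(1, 5) == get_data(1, 5)

# A caveat of this simple seeding: streams collide whenever the products
# match, e.g. get_data(1, 6) == get_data(2, 3), and all streams share
# event 0, since any stream_id times 0 seeds to 0.
assert get_data(1, 0) == get_data(2, 0)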

Let’s now write the actual endpoints.

Getting started with Starlette is very simple. We just need to initialize an app and then register some routes:

from starlette.applications import Starlette

app = Starlette()

To write a WebSocket service, both our web server and our framework of choice must explicitly support WebSockets. Luckily, Uvicorn and Starlette are up to the task, and writing a WebSocket endpoint is as convenient as writing a normal route.

This is all the code we need:

import asyncio
import itertools

from websockets.exceptions import WebSocketException

@app.websocket_route("/ws{id:int}")
async def websocket_endpoint(ws):
    id = ws.path_params["id"]
    try:
        await ws.accept()

        for i in itertools.count():
            data = {"id": i, "msg": get_data(id, i)}
            await ws.send_json(data)
            await asyncio.sleep(1)
    except WebSocketException:
        print("client disconnected")

The code above will make sure our websocket_endpoint function is called every time a browser requests a path starting with /ws and followed by a number (e.g. /ws1, /ws2).

Then, for every matching request, it will wait for a WebSocket connection to be established and subsequently start an infinite loop sending random numbers, encoded as a JSON payload, every second.
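
If you want to poke at the endpoint outside the browser, here is a quick client sketch using the Python websockets library (the address assumes the Uvicorn instance we start at the end of this section, bypassing Caddy):

import asyncio
import websockets

async def main():
    # Plain ws:// straight to the backend, no TLS involved.
    async with websockets.connect("ws://127.0.1.1:6001/ws1") as ws:
        for _ in range(3):
            print(await ws.recv())  # one JSON payload per second

asyncio.run(main())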

For Server-Sent Events the code is very similar, except that no special framework support is needed. In this case, we register a route matching URLs starting with /sse and ending with a number (e.g. /sse1, /sse2). However, this time our endpoint just sets the appropriate headers and returns a StreamingResponse:

from starlette.responses import StreamingResponse

@app.route("/sse{id:int}")
async def sse_endpoint(req):
    return StreamingResponse(
        sse_generator(req),
        headers={
            "Content-type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        },
    )

StreamingResponse is a utility class, provided by Starlette, which takes a generator and streams its output to the client, keeping the connection open.

The code of sse_generator is shown below, and is almost identical to the WebSocket endpoint, except that messages are encoded according to the Server-Sent Events protocol:

async def sse_generator(req):
    id = req.path_params["id"]
    for i in itertools.count():
        data = get_data(id, i)
        data = b"id: %d\ndata: %d\n\n" % (i, data)
        yield data
        await asyncio.sleep(1)
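
On the wire, the client then receives a stream of events like the following (the actual numbers depend on get_data; these values are illustrative):

id: 0
data: 339

id: 1
data: 627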

We are done!

Finally, assuming we put all our code in a file named server.py, we can start our backend endpoints using Uvicorn, like so:

$ uvicorn --host 127.0.1.1 --port 6001 server:app &
$ uvicorn --host 127.0.1.1 --port 6002 server:app &

Bonus: Cool SSE features

Ok, let us now conclude by showing how easy it is to implement all those nice features we bragged about earlier.

Compression can be enabled by changing just a few lines in our endpoint:

@@ -32,10 +33,12 @@ async def websocket_endpoint(ws):
 
 async def sse_generator(req):
     id = req.path_params["id"]
+    stream = zlib.compressobj()
     for i in itertools.count():
         data = get_data(id, i)
         data = b"id: %d\ndata: %d\n\n" % (i, data)
-        yield data
+        yield stream.compress(data)
+        yield stream.flush(zlib.Z_SYNC_FLUSH)
         await asyncio.sleep(1)
 
 
@@ -47,5 +50,6 @@ async def sse_endpoint(req):
             "Content-type": "text/event-stream",
             "Cache-Control": "no-cache",
             "Connection": "keep-alive",
+            "Content-Encoding": "deflate",
         },
     )

We can then verify that everything is working as expected by checking the DevTools:

SSE Compression

Multiplexing is enabled by default since Caddy supports HTTP/2. We can confirm that the same connection is being used for all our SSE requests using the DevTools again:

SSE Multiplexing

Resuming the stream after an unexpected connection error (the reconnection itself is handled automatically by the browser) is as simple as reading the Last-Event-ID header in our backend code:

-    for i in itertools.count():
+    start = int(req.headers.get("last-event-id", 0))
+    for i in itertools.count(start):

Nothing has to be changed in the frontend code.

We can test that it is working by starting the connection to one of the SSE endpoints and then killing uvicorn. The connection will drop, but the browser will automatically try to reconnect. Thus, if we restart the server, we will see the stream resume from where it left off!

Notice how the stream resumes from message 243. Feels like magic 🔥
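
You can also replay the header by hand; a sketch using the requests library (the id 240 is arbitrary, and verify=False merely accepts Caddy’s self-signed localhost certificate):

import requests

# Resume the stream from event 240 by sending Last-Event-ID ourselves,
# exactly as the browser does after a reconnect.
resp = requests.get(
    "https://localhost/sse1",
    headers={"Last-Event-ID": "240"},
    stream=True,
    verify=False,
)
for line in resp.iter_lines():
    print(line.decode())  # prints the id:/data: lines, resuming from 240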

Conclusion

WebSockets are a big piece of machinery, built on top of HTTP and TCP, that provides an extremely specific set of features: two-way, low-latency communication.

To do so, they introduce a number of complications, which end up making both client and server implementations more complex than solutions based entirely on HTTP.

These complications and limitations have been addressed by new specs (RFC 7692, RFC 8441), and will slowly make their way into client and server libraries.

However, even in a world where WebSockets have no technical downsides, they will still be a fairly complex technology, involving a large amount of additional code on both clients and servers. Therefore, you should carefully consider whether the added complexity is worth it, or whether you can solve your problem with a much simpler solution, such as Server-Sent Events.


That’s all, folks! I hope you found this post interesting and maybe learned something new.

Feel free to check out the code of the demo on GitHub, if you want to experiment a bit with Server-Sent Events and WebSockets.

I also encourage you to read the spec, because it is surprisingly clear and contains many examples.

Comments

You can comment this post on HN!
bullen on Feb 12, 2022 at 3:26 pm  [-]
I made the backend for this MMO on SSE over HTTP/1.1:

https://store.steampowered.com/app/486310/Meadow/

We have had a total of 350.000 players over 6 years and the backend out-scales all other multiplayer servers that exist and it's open source:

https://github.com/tinspin/fuse

You don't need HTTP/2 to make SSE work well. Actually the HTTP/2 TCP head-of-line issue and all the workarounds for that probably make it harder to scale without technical debt.


jayd16 on Feb 12, 2022 at 7:41 pm  [-]
>backend out-scales all other multiplayer servers

Can you explain what you mean here? What was your peak active user count, what was peak per server instance, and why you think that beats anything else?


rlabrecque on Feb 12, 2022 at 8:23 pm  [-]
Agreed, I'm curious as well. We load tested with real-clients faux-users, up to 1 million concurrent. And only stopped at 1 million because the test was becoming cost prohibitive.

bullen on Feb 13, 2022 at 12:06 am  [-]
The data is here: http://fuse.rupy.se/about.html

Under Performance. Per watt the fuse/rupy platform completely crushes all competition for real-time action MMOs because of 2 reasons:

- Event driven protocol design, averages at about 4 messages/player/second (means you cannot do spraying or headshots f.ex. which is another feature in my game design opinion).

- Java's memory model with atomic concurrency parallelism over shared memory which needs a VM and GC to work (C++ copied that memory model in C++11, but it failed completely because they lack both VM and GC, but that model is still to this day the one C++ uses), you can read more about this here: https://github.com/tinspin/rupy/wiki

These keep the internal latency of the server below maybe 100 microseconds at saturation, which no C++ server can handle even remotely, unless they copy Java's memory model and add a VM + GC so that all cores can work on the same memory at the same time without locking!

You can argue those points are bad arguments, but if you look at performance per watt with some consideration for developer friendlyness, I'm pretty sure in 100 years we will still be coding minimalist JavaSE (or some copy without Oracle) on the server and vanilla C (compiled with C++ compiler gcc/cl.exe) on the client to avoid cache misses.

Energy is everything!


lelanthran on Feb 13, 2022 at 9:15 pm  [-]
> - Java's memory model with atomic concurrency parallelism over shared memory which needs a VM and GC to work

Do you have a link that explains this bit?


bullen on Feb 13, 2022 at 9:58 pm  [-]
Not other than the one linked in the comment above. I have been reaching out to EVERYONE, and nobody can explain this to me, but I'll implement it myself soon so I can explain it.

lelanthran on Feb 13, 2022 at 10:59 pm  [-]
The links upthread don't actually explain why a VM + GC can do shared-memory concurrency faster[1].

I don't understand what particular piece of magic makes shared-memory concurrency under a VM+GC faster than a CAS implementation.

[1] I'm assuming a shared-memory threaded model of concurrency, not a shared-nothing message passing model of concurrency.


bullen on Feb 14, 2022 at 8:24 am  [-]
CAS?

Me neither, but I know it does in practice.

My intuition tells me the VM provides a layer decoupled from the hardware memory model so that there is less "friction" and the GC is required to reclaim shared memory that C++ would need to "stop the world" to reclaim anyhow! (all concurrent C++ objects leaks memory, see TBB concurrent_hash_map f.ex.) That means the code executes slower BUT the atomics can work better.

As I said; for 5 years I have been searching for answers from EVERYONE on the planet and nobody can answer. My guess is that this is so complicated, only a handfull can even begin to grook it, so nobody wants to explain it because it creates alot of wasted time.

The usual reaction is: Java is written in C, so how can Java be faster than C? Well I don't know how but I know it's true because I use it!

So my answer today is: Java is faster than C if you want to share memory between threads directly efficiently because you need a VM with GC to make the Java memory model (which everyone has copied so I guess it must be good?) work!

Here is someone who knows his concurrency and made C++ maps that might be better than TBB btw: https://github.com/preshing/junction

But no guarantees... you never get those with C/C++, I stopped downloading C/C++ code from the internet unless it has 100+ proved users! So stb/ttf and kuba/zip are my only dependencies.


lelanthran on Feb 14, 2022 at 9:09 am  [-]
> CAS?

https://en.wikipedia.org/wiki/Compare-and-swap

> My intuition tells me the VM provides a layer decoupled from the hardware memory model so that there is less "friction" and the GC is required to reclaim shared memory that C++ would need to "stop the world" to reclaim anyhow! (all concurrent C++ objects leaks memory, see TBB concurrent_hash_map f.ex.) That means the code executes slower BUT the atomics can work better.

I dunno about the GC bits; after all object pools are a thing in C++ so you have a consistent place (getting a new object) where reclamation of unused objects can be performed.

I think it might be down to mutex locking. In a native program, a failure to acquire the mutex causes a context-switch by performing a syscall (OS steps in, flushes registers, cache, everything, and runs some other thread).

In a VM language I would expect that a failure to acquire a mutex can be profiled by the VM with simple heuristics (Only one thread waiting for a mutex? Spin on the mutex until its released. More than five threads in the wait queue? Run some other thread).


azth on Feb 13, 2022 at 3:39 am  [-]
This is fantastic information.

HWR_14 on Feb 12, 2022 at 4:53 pm  [-]
Your license makes some sense, but it seems to include a variable perpetual subscription cost via gumroad. Without an account (assuming I found the right site), I have no idea what you would be asking for. I recommend making it a little clearer on the landing page.

That's said, it's very cool. Do you have a development blog for Meadow?


bullen on Feb 12, 2022 at 5:23 pm  [-]
Added link in the readme! Thx.

No, no dev log but I'll tell you some things that where incredible during that project:

- I started the fuse project 4 months before I set foot in the Meadow project office (we had like 3 meetings during those 4 months just to touch base on the vision)! This is a VERY good way of making things smooth, you need to give tech/backend/foundation people a head start of atleast 6 months in ANY project.

- We spent ONLY 6 weeks (!!!) implementing the entire games multiplayer features because I was 100% ready for the job after 4 months. Not a single hickup...

- Then for 7 months they finished the game client without me and released without ANY problems (I came back to the office that week and that's when I solved the anti-virus/proxy cacheing/buffering problem!).

I think Meadow is the only MMO in history so far to have ZERO breaking bugs on release (we just had one UTF-8 client bug that we patched after 15 minutes and nobody noticed except the poor person that put a strange character in their name).


Aeolun on Feb 12, 2022 at 6:03 pm  [-]
> MIT but [bunch of stuff]

Not MIT then. The beauty of MIT is that there is no stuff.


bullen on Feb 12, 2022 at 6:17 pm  [-]
We already discussed this in an earlier thread, and however bad this looks it's better than my own license.

Here it's clear, you can either use the code without money involved and then you have MIT (+ show logo and some example code is still mine).

If you want money then you have to share some of it.


dkersten on Feb 12, 2022 at 9:13 pm  [-]
While I get and support the intent, I don't like this usage of the name of the MIT license. I personally like the license because it tells me at a glance that I can use it for any purpose, commercial or otherwise, as long as the copyright and license is included and that there is no warranty. That's it, no complications, no other demands, no "if it makes X money or not", just I include the copyright and license terms and that's it, I can use the software whichever way I like.

Your license is not that. You have extra conditions that add complexity. I can no longer go "oh like MIT" and immediately use it for any purpose, because you require extras especially if I were to make money. That seems completely against the spirit of the simplicity of the MIT license which says you can do whatever you like, commercial or otherwise, as long as the copyright and license are included.

I think you should make your own license that includes the text of the MIT license, except removing the irrelevant parts (ie the commercial aspects include a caveat about requiring payment). You can still have a separate line of text explaining that the license is like the MIT license but with XYZ changes (basically the text you have now). But the license is not the MIT license and you should therefore have a separate license text that spells it out exactly. Not "its this, except scratch half of it because these additional terms override a good chunk of it".


bullen on Feb 12, 2022 at 11:27 pm  [-]
Ok, I agree but then I also have less time to work on real things and honestly I feel the whole legal/money part of our civilization is a huge waste of time in the face of energy problems that can't go away (2nd law of thermodynamics and sunlight + photosyntesis) and that my platform tries to help with elegantly by being the most efficient solution for MMO networking.

I'm also sad that nobody has solved this license problem yet, there is obviously a need for it. Sometimes time solves all problems though, so I probably just have to wait a while and somebody makes exactly the right license.

But I'm going to allocate some time if somebody who is willing to pay approaches me with the same concerns (it's actually why I switched to MIT in the first place, Unreal does not allow you to use client plugins that are LGPL)...

Small steps, we'll get there!


dkersten on Feb 13, 2022 at 4:20 pm  [-]
> then I also have less time to work on real things

What? To keep your existing license you copy the text of the MIT license, add the statement about requiring a logo and remove the parts about being able to use the code commercially, and add an extra paragraph that has the text you already have: to use commercially you have to sponsor. It’s not about changing your terms, it’s about being clear about what license applies. Hybrid with MIT and other implies that the MIT license somehow applies yet it does not since your “other” invalidates a chunk of what the MIT license allows. Just removing those bits and not calling it MIT is enough.

If that significantly cuts into your time to do other stuff then I don’t know what to say.

> if somebody who is willing to pay approaches me with the same concerns

I’m not concerned about the license per se. It wouldn’t stop me from paying if I wanted to use your software. It’s just that to everyone looking, before even evaluating it, you’re sending a dishonest message, that somehow the MIT license applies when it clearly does not.

I would create a license.txt file that contains a copy of the MIT license text with the commercial use phrasing removed, the need for displaying logo added a second paragraph before the warranty disclaimer stating that you may use the software commercially so long as you sponsor (same text you already have). Then I would link it from the readme with an explanation: proprietary license that is similar to the MIT license except with the conditions of logo and for commercial use requiring sponsorship (existing text more or less). Clear, simple and should take you no more than ten minutes to fix.

My objection is that you are claiming it’s under MIT license and using the MIT licenses name recognition, while applying changes that very clearly make it not MIT license at all.


bullen on Feb 14, 2022 at 7:59 am  [-]
I didn't know how small the MIT license was!!!

"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:"

Became

"Permission is hereby granted, to any person obtaining a copy of this software and associated documentation files (the "Software"), to use the Software, including the rights to copy, modify, merge, publish and/or distribute the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:"

And then I list the stuff...

1) You have to show the logo on startup.

2) You have to sponsor the fuse tier on gumroad while you are using the Software, or any derived Software, commercially:

https://tinspin.gumroad.com/l/xwluh

3) The .html and graphics are proprietary examples except the javascript in play.html

4) The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

it feels completely meaningless to put it that way... laws are really the most superficial waste of time, it only took 5 minutes to edit but it's a lifetime of trouble!

But thanks I guess, if it works... but you only know that later... possibly eons later!


dkersten on Feb 14, 2022 at 1:45 pm  [-]
Great, that's all I wanted. I hope it didn't take you more than a few minutes to do.

> I didn't know how small the MIT license was!!!

This is kind of a problem, before you were saying its like the MIT license but if you didn't know how small it was, you couldn't have read it, so...

Anyway, no point in beating a dead horse, you have corrected the issue I had, so thank you!

> laws are really the most superficial waste of time

Until you need them, at least.

In any case, good luck with your project. It does technically look very interesting.


smw on Feb 12, 2022 at 6:30 pm  [-]
I respect your right to license you product however you want, but please don't call that open source.

Spivak on Feb 12, 2022 at 9:08 pm  [-]
Requiring attribution doesn’t make something not open source. At best this means that the example code isn’t open source.

account42 on Feb 14, 2022 at 9:12 am  [-]
Requiring attribution is not the problem, but restricting commercial use makes this not an open source license.

bullen on Feb 12, 2022 at 6:42 pm  [-]
Open-source also means the source is open, what you are looking for is free and honestly nothing is free... if you have a better term I'm open for suggestions.

But really open-source (as in free) is the misnomer here, it should be called free open-source, or FOSS as some correctly name it.


e12e on Feb 12, 2022 at 7:05 pm  [-]
That battle has been fought already, and the accepted term is "source available", not "open source". (And gnu adds Free or "libre" software, which is software licence in a way that tries to ensure the "four freedoms" for all downstream users of software - such as freedom zero - the right to run software (no need for eg: cryptographic signature/trusted software - without a way for the user to define trust).

bullen on Feb 12, 2022 at 7:37 pm  [-]
Ok, fixed it elsewhere and in my brain... :/ Thx! Can't edit the comment though.

Plasmoid on Feb 12, 2022 at 7:12 pm  [-]
I've seen these referred to as "Source-available licenses". This would cover things like Mongo's SSPL.

The bare reality is that it's just a commercial license.


dragonwriter on Feb 12, 2022 at 7:08 pm  [-]
> FOSS

FOSS or F/OSS is a combination of Free (as defined by the FSF) and Open Source (as defined by the OSI) (the last S is Software), which recognizes that the two terms, while they come from groups with different ideological motivations, refer to approximately the same substantive licensing features and almost without exception the same set of licenses.


hamburglar on Feb 12, 2022 at 8:33 pm  [-]
Personally, while I appreciate the difference from a promotion-of-FOSS point of view, I find it obnoxious that FOSS idealists think they can dictate the usage of the generic phrase “open source” and start these kinds of arguments in threads where non-completely-free software whose source is open comes up. We haven’t all agreed on your terminology, and the argument is not “settled” except in the minds of the folks who think everyone should be on board with making this purity distinction. Some people find the distinction uninteresting and don’t need to bother themselves with the ideological argument or agree to its terminology. And trying to be the arbiters of language is not a good look for the “information wants to be free” crowd.

gmfawcett on Feb 13, 2022 at 10:03 pm  [-]
> I find it obnoxious that FOSS idealists think they can dictate the usage of the generic phrase “open source”

Since when was "open source" generic? These obnoxious idealists you're complaining about are the people who invented the term in the first place.


hamburglar on Feb 14, 2022 at 4:22 pm  [-]
Since the several decades before a few people decided to co-opt it for strategic political reasons. You apparently don’t remember the arguments over whether they should be called “free software” or “open source.” Both terms were already in use. I grew up downloading shareware, some of which was open source and some of which was not. It almost universally came with a limited use license and a request for some money if you used it. This is how Unix started too. Limited license with open source code. You can send your changes back to us but you can’t distribute them.

hamburglar on Feb 14, 2022 at 6:44 pm  [-]
RMS has even published an essay talking about how the term “open source” is a poor choice because it has an obvious common sense definition that means “you can see the source.”

bullen on Feb 13, 2022 at 12:42 am  [-]
I agree, but let's give them "source available" and maybe they'll be more inclined to help?

We all need to get out of the current legal/monetary system soon enough.


hamburglar on Feb 13, 2022 at 1:51 am  [-]
You give ‘em whatever you want. :) I think I’ll publish my next open source project with a license that permits no use whatsoever. Code provided for entertainment purposes only.

bastawhiz on Feb 12, 2022 at 4:01 pm  [-]
Can you explain how H2 would make it harder to scale SSE?

bullen on Feb 12, 2022 at 4:21 pm  [-]
The mistake they did was to assume only one TCP socket should be used; the TCP has it's own head-of-line limitations just like HTTP/1.1 has if you limit the number of sockets (HTTP/1.1 had 2 sockets allowed per client, but Chrome doesn't care...) it's easily solvable by using more sockets but then you get into concurrency problems between the sockets.

That said if you, like SSE on HTTP/1.1; use 2 sockets per client (breaking the RFC, one for upstream and one for downstream) you are golden but then why use HTTP/2 in the first place?

HTTP/2 creates more problems than solutions and so does HTTP/3 unfortunately until their protocol fossilizes which is the real feature of a protocol, to become stable so everyone can rely on things working.

In that sense HTTP/1.1 is THE protocol of human civilization until the end of times; together with SMTP (the oldest protocol of the bunch) and DNS (which is centralized and should be replaced btw).


jupp0r on Feb 12, 2022 at 4:33 pm  [-]
The issues with TCP head-of-line blocking are resolved in HTTP/3 (QUIC).

bullen on Feb 12, 2022 at 4:44 pm  [-]
Sure but then HTTP/3 is still binary and it's in flux meaning most routers don't play nice with it yet and since HTTP/1.1 works great for 99.9% of the usecases I would say it's a complete waste of time, unless you have some new agenda to push.

Really people should try and build great things on the protocols we have instead of always trying to re-discover the wheel, note: NOT the same as re-inventing the wheel: http://move.rupy.se/file/wheel.jpg


sleepydog on Feb 12, 2022 at 11:52 pm  [-]
TCP is not perfect. There is ambiguity between the acknowledgement of a segment and the retransmission of a segment that we have tried to address with extensions and heuristics. It can introduce latency for upper protocols which are comprised of fixed-length messages. SACK reneging is a crime against humanity.

It's amazing that we've been able to adapt the protocol in a backwards-compatible fashion for over 30 years, but QUIC addresses problems with TCP in ways that could not be done in a backwards-compatible fashion. Personally I wish the protocol were simpler, but I lack the expertise to say what should be removed.


bushbaba on Feb 12, 2022 at 6:32 pm  [-]
For FANG scale, a 1% performance improvement for certain services has measurable business results.

Take Snap. Say they reduced time to view a snap by 10ms. After 100 snaps that’s an additional 1 second of engagement. This could equate to an additional ad impression every week per user. Which is many millions of additional revenue.


vlovich123 on Feb 12, 2022 at 6:34 pm  [-]
HTTP/3 is E2E encrypted and built on UDP. What does “most routers don’t play nice with it yet” mean in that context? Do you mean middleware boxes/routers rather than end user routers?

anderspitman on Feb 12, 2022 at 6:56 pm  [-]
For me the question is not so much "yet" as "maybe never", since some networks block UDP altogether, and HTTP/3 has a robust fallback mechanism.

ikiris on Feb 12, 2022 at 7:30 pm  [-]
It means they don't actually understand networking, but think they do.

klabb3 on Feb 12, 2022 at 8:01 pm  [-]
> HTTP/3 is E2E encrypted

Please elaborate.


homarp on Feb 12, 2022 at 10:56 pm  [-]
https://www.youtube.com/watch?v=J4fR5aztSwQ - Securing the Next Version of HTTP: How QUIC and HTTP/3 Compare to HTTP/2

"QUIC is a new always-encrypted general-purpose transport protocol being standardized at the IETF designed for multiplexing multiple streams of data on a single connection. HTTP/3 runs over QUIC and roughly replaces HTTP/2 over TLS and TCP. QUIC combines the cryptographic and transport handshakes in a way to allow connecting to a new server in a single round trip and to allow establishing a resumed connection in zero round trips, with the client sending encrypted application data in its first flight. QUIC uses TLS 1.3 as the basis for its cryptographic handshake.

This talk will provide an overview of what the QUIC protocol does and how it works, and then will dive deep into some of the technical details. The deep dive will focus on security-related aspects of the protocol, including how QUIC combines the transport and cryptographic handshakes, and how resumption, including zero-round-trip resumption works. This will also cover how QUIC’s notion of a connection differs from the 5-tuple sometimes used to identify connections, and what QUIC looks like on the wire.

In addition to covering details of how QUIC works, this talk will also address implementation and deployment considerations. This will include how a load balancer can be used with cooperating servers to route connections to a fleet of servers while still maintaining necessary privacy and security properties. It will also look back at some of the issues with HTTP/2 and discuss which ones may need to be addressed in QUIC implementations as well or are solved by the design of QUIC and HTTP/3."


klabb3 on Feb 13, 2022 at 6:53 am  [-]
I think the common definition of e2e encryption covers user-to-user communication, so I'm confused how a transport protocol can offer e2e encryption at all (it would only do so if Quic is used over p2p between users, but that's a property of the application).

But even if the definition were different, http+tls would also be e2e encrypted (if used in conjunction which it pretty much always is).

I appreciate Quic but from a security perspective I don't see how it's different to what we've had for at least a decade.


vlovich123 on Feb 13, 2022 at 10:15 pm  [-]
The difference is that the protocol itself is also encrypted (not just the application layer). In other words middleware can’t ossify the QUIC protocol and you’re not reliant on middleware to do anything other than route UDP (which lets you do whatever you want to the protocol itself).

jupp0r on Feb 14, 2022 at 2:50 pm  [-]
QUIC had 0-roundtrip handshakes and brought it to TLS 1.3.

fwsgonzo on Feb 12, 2022 at 6:29 pm  [-]
While I agree, we shouldn't discount one less RTT for encrypted connections. Latency is a problem that never really goes away, and we can only try to reduce RTTs.

jupp0r on Feb 14, 2022 at 2:56 pm  [-]
If people had tried to build great things on the protocols we have instead of re-discovering the wheel, we'd still have gopher, FTP and telnet for most things. Technology evolves and that's a good thing.

jupp0r on Feb 14, 2022 at 10:16 pm  [-]
Routers are operating on the IP layer of the network stack, they don't have anything to do with application level protocols.

y4mi on Feb 12, 2022 at 6:26 pm  [-]
> and DNS (which is centralized and should be replaced btw

So much nonsense in a single paragraph, amazing.

If anything DNS is less centralized then http and SMTP. Its a surprisingly complicated system for what it does because of all the caching etc, but calling it more centralized then http is just is just ignorant to a silly degree


bastawhiz on Feb 13, 2022 at 2:23 am  [-]
Sorry, what do you mean by "one for upstream and one for downstream"? You can't send messages back to the server with SSE.

dlsa on Feb 12, 2022 at 9:20 pm  [-]
Couldn't find a license file in the root folder of that github. I found a license in a cpp file buried in the sec folder. You should consider putting the licensing for this kind of project in a straightforward and locatable place.

bullen on Feb 13, 2022 at 12:50 am  [-]
That license in the cpp file is for the SHA256 code.

My license is messy but if you search for "license" on the main github page you'll eventually find MIT + some ugly modifications I made.


bullen on Feb 14, 2022 at 8:39 am  [-]
I now added a license.txt

shams93 on Feb 13, 2022 at 3:03 am  [-]
Not just great for games but for large scale webrtc signaling for p2p.

smashah on Feb 12, 2022 at 10:47 pm  [-]
Love your hybrid model via gumroad! I do something similar for my own open-source project

https://github.com/open-wa/wa-automate-nodejs

There should be some sort of support group for those of us trying to monetize (sans donations) our open source projects!


bullen on Feb 12, 2022 at 11:33 pm  [-]
I just found out gumroad pays VAT on recurring payments a couple of weeks ago.

We probably need a new license though because piggybacking on MIT (or any other license) like I try to do is rubbing people the wrong way.

But law and money are my least favourite passtimes, so I'm going to let somebody else do it first unless somebody is willing to force this change by buying a license and asking for a better license text.


account42 on Feb 14, 2022 at 9:19 am  [-]
> open source projects

Your source-available projects. Nothing wrong with licensing your work that way (in the sense that you can make that choice, not in the sense that I think its a good idea) but please don't muddle the term "open source".


bullen on Feb 15, 2022 at 6:35 pm  [-]
Well technically rupy is open-source (ALGPL, yet another license that still doesn't exist) and since fuse (source-available) is built on top, you can maybe call it open-source, specially since rupy is like 90% of the code.

smashah on Feb 16, 2022 at 9:59 am  [-]
Thanks for your input I will still continue to use whatever phrasing I see fit :)

shams93 on Feb 12, 2022 at 8:38 pm  [-]
It's a lot easier to scale than websockets, where you need a pub sub solution and a controller to publish shared state changes. SSE is really simple in comparison.

stavros on Feb 12, 2022 at 4:49 pm  [-]
Probably not the same person, but did you ever play on RoD by any chance?

bullen on Feb 12, 2022 at 4:52 pm  [-]
Probably not, since I dont know what RoD is.

stavros on Feb 12, 2022 at 4:55 pm  [-]
I thought so, thanks!

herodoturtle on Feb 12, 2022 at 4:19 pm  [-]
Nice work, thanks for sharing.

mmcclimon on Feb 12, 2022 at 4:38 pm  [-]
SSEs are one of the standard push mechanisms in JMAP [1], and they're part of what make the Fastmail UI so fast. They're straightforward to implement, for both server and client, and the only thing I don't like about them is that Firefox dev tools make them totally impossible to debug.

1. https://jmap.io/spec-core.html#event-source


chrismorgan on Feb 13, 2022 at 1:17 pm  [-]
It is, however, interesting to note that Fastmail’s webmail doesn’t use EventSource, but instead implements it atop fetch or XMLHttpRequest. An implementation atop XMLHttpRequest was required in the past because IE lacked EventSource, but had that been the only reason, it’d just have been done polyfill style; but it’s not. My foggy recollection from 4–5 years ago (in casual discussion while I worked for Fastmail) is that it had to do with getting (better?) control over timeout/disconnect/reconnect, probably handling Last-Event-ID, plus maybe skipping browser bugs in some older (now positively ancient and definitely unsupported) browsers. The source for that stuff is the three *EventSource.js files in https://github.com/fastmail/overture/tree/master/source/io.

ok_dad on Feb 12, 2022 at 7:04 pm  [-]
> the only thing I don't like about them is that Firefox dev tools make them totally impossible to debug

You can't say that and not say more about it, haha. Please expand on this?

Also, I'm a Fastmail customer and appreciate the nimble UI, thanks!


coder543 on Feb 12, 2022 at 7:53 pm  [-]
I think their information could be outdated. Since Firefox 82, you can supposedly inspect the content of SSE streams: https://developer.mozilla.org/en-US/docs/Tools/Network_Monit...

Before that... yeah, the Firefox dev tools were not very helpful for SSE.


mmcclimon on Feb 12, 2022 at 8:47 pm  [-]
Hmm! You're right that I hadn't looked it a while, so I checked before making the comment above. I'm still seeing the same thing I always have, which is "No response data available for this request". Possibly something is slightly wrong somewhere (though Chrome dev tools seem fine on the same), but you've given me something to look into, thanks!

coder543 on Feb 12, 2022 at 8:56 pm  [-]
That is interesting. I just tested it myself, and at least for my setup (Firefox on Mac on ARM), the events only showed up in the dev tools if the server closed the SSE connection... so, maybe Firefox still hasn't fully fixed this problem.

mmcclimon on Feb 12, 2022 at 9:57 pm  [-]
Yeah, that seems to be the case (confirmed with their little example at https://github.com/mdn/dom-examples/tree/master/server-sent-...). Once the connection is closed you can see things, but that's not particularly useful for debugging!

noisy_boy on Feb 13, 2022 at 1:22 am  [-]
Funny because I love fastmail but literally the only complaint I have is their android app takes too long to load

dnr on Feb 12, 2022 at 9:58 pm  [-]
The Fastmail UI is indeed snappy, except when it suddenly decides it has to reload the page, which seems to be multiple times a day these days (and always when I need to search for a specific email). Can you make it do what one of my other favorite apps does: when there's a new version available, make a small pop up with a reload button, but don't force a reload (until maybe weeks later)?

mythz on Feb 12, 2022 at 4:16 pm  [-]
We use SSE for our APIs Server Events feature https://docs.servicestack.net/server-events with C#, JS/TypeScript and Java high-level clients.

It's a beautifully simple & elegant lightweight push events option that works over standard HTTP, the main gotcha for maintaining long-lived connections is that server/clients should implement their own heartbeat to be able to detect & auto reconnect failed connections which was the only reliable way we've found to detect & resolve broken connections.


dabeeeenster on Feb 12, 2022 at 8:58 pm  [-]
"the main gotcha for maintaining long-lived connections is that server/clients should implement their own heartbeat to be able to detect & auto reconnect failed connections"

That sounds like a total nightmare!


ec109685 on Feb 12, 2022 at 9:42 pm  [-]
Definitely needed. That would be true for all long lived connection protocols in order to detect connection interruptions in a timely fashion.

easrng on Feb 13, 2022 at 3:30 am  [-]
Though in JS an EventSource does automatically try to reconnect once it notices the connection is dropped, unlike a WebSocket.

mythz on Feb 15, 2022 at 9:13 am  [-]
It's not good enough in our experience, EventSource can still think it's connected when the server can no longer push data onto it. The periodic heartbeat to verify messages can still be sent on the connection is the only reliable way we've found to detect & autoretry failed connections.

szastamasta on Feb 12, 2022 at 3:59 pm  [-]
My experience with sse is pretty bad. They are unreliable, don’t support headers and require keep-alive hackery. In my experience WebSockets are so much better.

Also ease of use doesn’t really convince me. It’s like 5 lines of code with socket.io to have working websockets, without all the downsides of sse.


88913527 on Feb 12, 2022 at 5:55 pm  [-]
HTTP headers must be written before the body; so once you start writing the body, you can't switch back to writing headers.

Server-sent events appears to me to just be chunked transfer encoding [0], with the data structured in a particular way (at least from the perspective of the server) in this reference implementation (tl,dr it's a stream):

https://gist.github.com/jareware/aae9748a1873ef8a91e5#file-s...

[0]: https://en.wikipedia.org/wiki/Chunked_transfer_encoding
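
For illustration, a hedged sketch of what such a response could look like on the wire (chunk-length framing omitted); each event is just a few lines of text terminated by a blank line:

    HTTP/1.1 200 OK
    Content-Type: text/event-stream
    Cache-Control: no-cache
    Transfer-Encoding: chunked

    data: first event

    data: second event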


patrickthebold on Feb 12, 2022 at 8:04 pm  [-]
Maybe I misunderstood your claim, but there is: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Tr...

Which seems to be what you need to send 'headers' after a chunked response.


88913527 on Feb 12, 2022 at 9:45 pm  [-]
You understood correctly; I was mis-informed. Today I learned about the "Trailer" header. I'm curious how HTTP clients handle that. A client like window.fetch will resolve with a headers object-- does it include the trailers or not? I'd have to test it out.

tehbeard on Feb 12, 2022 at 11:21 pm  [-]
More so the issue is that the "native" browser client, EventSource [1], is pretty much just a URL and "withCredentials" (send/don't send cookies)

You can kludge it with fetch(...) and the body stream

1. <https://developer.mozilla.org/en-US/docs/Web/API/EventSource>


yencabulator on Feb 15, 2022 at 7:52 pm  [-]
SSE comes from a time when the browser APIs exposed to JavaScript forced a full download of the response body before it was handed to the JavaScript.

With current day APIs, including streaming response bodies in the fetch API, SSE would probably not have been standardized as a separate browser API.


ricardobeat on Feb 12, 2022 at 5:51 pm  [-]
Mind expanding on your experience and how websockets are more reliable than SSE? One of the main benefits of SSE is reliability from running on plain HTTP.

dnautics on Feb 12, 2022 at 6:15 pm  [-]
I've done both. One big one is that SSE connections will eventually time out, and you WILL have to renegotiate, so there will be a huge latency spike on those events. They are easier in Elixir than most PLs, but honestly if you're using Elixir, you might as well use Phoenix's built-in websocket support.

odonnellryan on Feb 12, 2022 at 8:40 pm  [-]
How is this different from websockets? They will eventually close for various reasons, sometimes in not obvious ways.

bullen on Feb 12, 2022 at 6:20 pm  [-]
Not if you send "noop" messages.

dnautics on Feb 12, 2022 at 8:47 pm  [-]
In my experience, sse times out way more than ws, even if you are always sending (I was streaming jpegs using sse).

bullen on Feb 12, 2022 at 11:40 pm  [-]
I think it might be your ISP. For example, one of my ISPs cuts off my SSH connections no matter what I do. They simply dislike hanging SSH connections.

It's just random that your ISP likes WebSockets more than long HTTP responses, and it can change in a heartbeat and for most people it will be different. As I said before, 99.6% successful networking is an unheard-of number for real-time multiplayer games.

I only care about that number; until you prove with hard stats and 350,000 real users from everywhere on the planet that WebSocket has a 99.7% success rate, I'm not even going to flinch.


dnautics on Feb 13, 2022 at 5:04 am  [-]
It's not random. Limited timeouts for HTTP are a policy, often set at several layers, to prevent certain types of security regressions.

bullen on Feb 13, 2022 at 7:52 am  [-]
Ok, what security regressions?

dnautics on Feb 13, 2022 at 10:36 pm  [-]
I am sure you can figure it out. I'm not bullshitting you.

mikojan on Feb 12, 2022 at 4:12 pm  [-]
What? How do they not support headers?

You have to send "Content-Type: text/event-stream" just to make them work.

And you keep the connection alive by sending "Connection: keep-alive" as well.

I've never had any issues using SSEs.


szastamasta on Feb 12, 2022 at 5:13 pm  [-]
I mean you cannot send stuff from the client. If you're using tokens for auth and don't want to use session cookies, you end up with ugly polyfills.

coder543 on Feb 12, 2022 at 6:00 pm  [-]
> If you’re using tokens for auth and don’t want to use session cookies

That sounds like a self-inflicted problem. Even if you’re using tokens, why not store them in a session cookie marked with SameSite=strict, httpOnly, and secure? Seems like it would make everything simpler, unless you’re trying to build some kind of cross-site widget, I guess.


szastamasta on Feb 12, 2022 at 7:43 pm  [-]
I need to work with more than 1 backend :)

coder543 on Feb 12, 2022 at 9:00 pm  [-]
This is such an opaque response, I don't know what else could be said. If you're sending the same token to multiple websites, something feels very wrong with that situation. If it's all the same website, you can have multiple backends "mounted" on different paths, and that won't cause any problems with a SameSite cookie.

szastamasta on Feb 12, 2022 at 9:29 pm  [-]
Then you need a single point of failure that handles session validation. Without it, part of your app might keep working even without your session storage.

coder543 on Feb 12, 2022 at 9:40 pm  [-]
You can store a JWT in a session cookie. You don’t need a SPoF for session validation, if that’s not what you want.

tekknik on Feb 17, 2022 at 6:39 am  [-]
> Also ease of use doesn’t really convince me. It’s like 5 lines of code with socket.io to have working websockets, without all the downsides of sse.

You can also implement websockets in 5 lines (less, really 1-3 for a basic implementation) without socket.io. Why are you still using it?


bullen on Feb 12, 2022 at 4:27 pm  [-]
They don't support headers in javascript, that is more a problem with javascript than SSE.

Read my comment below about that.


jFriedensreich on Feb 12, 2022 at 4:25 pm  [-]
sounds like you did not really evaluate both technologies at the heart but only some libraries on top?

szastamasta on Feb 12, 2022 at 5:11 pm  [-]
Yeah, sorry. In socket.io it’s 2 lines. You need 5 lines with browser APIs :).

You simply get stuff like auto-reconnect and graceful failover to long polling for free when using socket.io


coder543 on Feb 12, 2022 at 5:51 pm  [-]
SSE EventSource also has built-in auto-reconnect, and it doesn’t even need to support failover to long polling.

Neither of those being built into a third party websocket library are actually advantages for websocket… they just speak to the additional complexity of websocket. Plus, long polling as a fallback mechanism can only be possible with server side support for both long polling and websocket. Might as well just use SSE at that point.


tekknik on Feb 17, 2022 at 6:44 am  [-]
2 lines vs 5 lines. did you check your payload size after adding socket.io? you added way more than 2 lines.

long polling shouldn’t be needed anymore, and auto reconnect is trivial to implement.


pier25 on Feb 13, 2022 at 5:03 pm  [-]
WebSockets are also quite unreliable but Socket.IO hides all this from you.

tekknik on Feb 17, 2022 at 6:43 am  [-]
socket.io doesn’t do as much as the bloat warrants. implementing the same features (heartbeats and reconnects) takes minimal code. socket.io was useful when certain browsers didn’t support websockets, now it’s used mostly by people scared of sockets imo

leeoniya on Feb 12, 2022 at 4:23 pm  [-]
the biggest drawback with SSE, even when unidirectional comm is sufficient is

> SSE is subject to limitation with regards to the maximum number of open connections. This can be especially painful when opening various tabs as the limit is per browser and set to a very low number (6).

https://ably.com/blog/websockets-vs-sse

SharedWorker could be one way to solve this, but lack of Safari support is a blocker, as usual. https://developer.mozilla.org/en-US/docs/Web/API/SharedWorke...

also, for websockets, there are various libs that handle auto-reconnects

https://github.com/github/stable-socket

https://github.com/joewalnes/reconnecting-websocket

https://dev.to/jeroendk/how-to-implement-a-random-exponentia...


coder543 on Feb 12, 2022 at 6:07 pm  [-]
This isn’t a problem with HTTP/2. You can have as many SSE connections as you want across as many tabs as the user wants to use. Browsers multiplex the streams over a handful of shared HTTP/2 connections.

If you’re still using HTTP/1.1, then yes, this would be a problem.


leeoniya on Feb 12, 2022 at 6:24 pm  [-]
hmmm, you might be right. i wonder what steered me away. maybe each SSE response re-sends headers, which can be larger than the message itself?

maybe it was inability to do broadcast to multiple open sse sockets from nodejs.

i should revisit.

https://medium.com/blogging-greymatter-io/server-sent-events...


xg15 on Feb 13, 2022 at 4:23 pm  [-]
> maybe each SSE response re-sends headers, which can be larger than the message itself?

That's (long) polling, not SSE. The only overhead for SSE are the "data", "event", etc pseudo header names and possibly some chunked-encoding markers. Both are tiny though.


bullen on Feb 12, 2022 at 4:25 pm  [-]
It used to be 2 sockets per client, so now it's 6?

Well it's a non-problem; if you need more bandwidth than one socket in each direction can provide, you have much bigger problems than the connection limit, which you can just ignore.


leeoniya on Feb 12, 2022 at 4:45 pm  [-]
the problem is multiple tabs. if you have, e.g. a bunch of Grafana dashboards open on multiple screens in different tabs (on same domain), you will exhaust your HTTP connection limit very quickly with SSE.

in most cases this is not a concern, but in some cases it is.


bullen on Feb 12, 2022 at 4:50 pm  [-]
Aha, ok yes then you would need to have many subdomains?

Or make your own tab system inside one browser tab.

I can see why that is a problem for some.


easrng on Feb 13, 2022 at 3:39 am  [-]
Another way to solve it could be using a BroadcastChannel to communicate between tabs, do some kind of leader election to figure out which one should start the EventSource, and then have the leader relay the events over the channel.
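
A hedged sketch of that idea, using the Web Locks API as one way to do the election (the names "sse-relay" and "sse-leader" and the /events endpoint are hypothetical):

    function handle(data) { /* render the update; hypothetical consumer */ }

    const channel = new BroadcastChannel('sse-relay');
    channel.onmessage = (e) => handle(e.data); // follower tabs consume relayed events

    navigator.locks.request('sse-leader', () => new Promise(() => {
      // Only the tab that wins the lock gets here: it opens the single
      // EventSource and relays every event to the other tabs.
      const source = new EventSource('/events');
      source.onmessage = (event) => {
        handle(event.data);              // BroadcastChannel skips the sender,
        channel.postMessage(event.data); // so the leader handles events directly
      };
      // The promise never settles, so the lock (i.e. leadership) is held until
      // this tab closes; then a pending request in another tab wins.
    }));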

hishamp on Feb 12, 2022 at 10:05 pm  [-]
We moved away from WebSockets to SSE, and realised it wasn't making things any better. In fact, it made things worse, so we switched back to WebSockets again and worked on scaling WebSockets. SSE will work much better for other cases; it just didn't work out for ours.

The first reason was that broadcasting meant looping through an array of connections to send data. We had around 2000 active connections and needed less than 1000ms latency; with WebSocket, even though we faced connection drops, clients received data on time. But with SSE, it took many seconds to reach some clients, and since the data was time critical, WebSocket seemed much easier to scale for our purposes. Another issue was that SSE is more of a pattern you build yourself on top of HTTP APIs, so it doesn't have as much support around it as WS. Things like rooms, clientIds etc. needed to be managed manually, which was also quite a big task by itself. And a few other minor reasons combined made us switch back to WS.

I think SSE will suit much better for connections where bulk broadcast is less, like in shared docs editing, or showing stuff like "1234 users are watching this product" etc. And keep in mind that all this is coming from a mediocre full stack developer with only 3 YOE, so take it with a grain of salt.


DaiPlusPlus on Feb 12, 2022 at 10:30 pm  [-]
Your write-up sounds like your issues with SSE stemmed from the framework/platform/server-stack you're using rather than of any problems inherent in SSE.

I haven't observed any latency or scaling issues with SSE - on the contrary: in my ASP.NET Core projects, running behind IIS (with QUIC enabled), I get better scaling and throughput with SSE compared to raw WebSockets (and still-better when compared to SignalR), though latency is already minimal so I don't think that can be improved upon.

That said, I do prefer using the existing pre-built SignalR libraries (both server-side and client-side: browser and native executables) because the library's design takes away all the drudgery.


hishamp on Feb 13, 2022 at 5:10 am  [-]
Yes, that might be the case, it might be due to the implementation issues from my part. And in that case, instead of taking too much time configuring SSE, we decided to move back to WS which already had resources in place to easily get things done, which worked well for us. Not against SSE, just saying it didn’t work out for us for our specific requirements with our team size and expertise. Just a heads up, that’s all.

Lorin on Feb 13, 2022 at 12:45 am  [-]
DaiPlusPlus on Feb 13, 2022 at 3:27 am  [-]
Huh - that article is weird, I've been using QUIC in IIS on Windows Server "2004" (2020-04) for almost 2 years now, see https://serverfault.com/questions/824278/can-iis-serve-my-ne...

Maybe it's still in testing in Server 2022 but fine in 2004?


justinsaccount on Feb 12, 2022 at 11:07 pm  [-]
As the other comment says, there's nothing inherent to SSE that would have made it slower than websockets. Ultimately they are both just bytes being sent across a long lived tcp connection.

Sounds like the implementation you were using was introducing the latency.


etimberg on Feb 12, 2022 at 11:18 pm  [-]
Do you have any good resources on making websockets scale / stable?

nly on Feb 12, 2022 at 10:38 pm  [-]
Check out nchan.io

rawoke083600 on Feb 12, 2022 at 3:17 pm  [-]
I like them, they're surprisingly easy to use.

One example where I found them to be not the perfect solution was with a web turn-based game.

SSE was perfect for pushing the game state to all clients, but to get good latency from the player's point of view whenever the player had to do something, that went via a normal ajax-http call.

Eventually I had to switch to uglier websockets and keep the connection open.

HTTP keep-alive wasn't that reliable.


coder543 on Feb 12, 2022 at 5:03 pm  [-]
With HTTP/2, the browser holds a TCP connection open that has various streams multiplexed on top. One of those streams would be your SSE stream. When the client makes an AJAX call to the server, it would be sent through the already-open HTTP/2 connection, so the latency is very comparable to websocket — no new connection is needed, no costly handshakes.

With the downsides of HTTP/1.1 being used with SSE, websockets actually made a lot of sense, but in many ways they were a kludge that was only needed until HTTP/2 came along. As you said, communicating back to the server in response to SSE wasn’t great with HTTP/1.1. That’s before mentioning the limited number of TCP connections that a browser will allow for any site, so you couldn’t use SSE on too many tabs without running out of connections altogether, breaking things.


rawoke083600 on Feb 14, 2022 at 7:33 am  [-]
>the browser holds a TCP connection open that has various streams multiplexed on top. One of those streams would be your SSE stream. When the client makes an AJAX call to the server, it would be sent through the already-open HTTP/2 connection

Very interesting ! I honestly didn't know that, or even think about it like that ! #EveryDayYouLearn :)


xg15 on Feb 13, 2022 at 4:32 pm  [-]
I wonder if websockets "accidentally" circumvented HTTP2's head-of-line blocking problem and therefore appeared to have better latency:

SSE streams are multiplexed into a single HTTP/2 connection, so they can suffer from congestion issues caused by unrelated requests.

In contrast, HTTP2 does not support websockets, so each websocket connection always has its own TCP connection. Wasteful, but ensures that no head-of-line blocking can occur.

So it might be that switching from SSE to websockets gave better latency behaviour, even though it had nothing to do with the actual technologies.

Of course, this issue should be solved anyway with HTTP3.


coder543 on Feb 13, 2022 at 4:47 pm  [-]
> Wasteful, but ensures that no head-of-line blocking can occur.

That’s not how head-of-line blocking works. Just having a single stream doesn’t guarantee no blocking. It’s not really about unrelated requests getting in the way and sucking up bandwidth (that’s a separate issue, and arguably applies regardless of how many TCP connections you have), head-of-line blocking is about how TCP handles retransmission of lost packets. Websocket suffers from head-of-line blocking too, which is a reason that WebTransport is being developed.

Certainly, if you have other requests in flight, you could have head-of-line blocking because of a packet that was dropped in a response stream that isn’t related to your SSE stream, but this only applies if there’s packet loss, and the packets that were lost could just as easily be SSE’s or websocket’s.

I agree that HTTP/3 should solve the issue of head-of-line blocking being caused by packets lost from an unrelated stream, but it doesn’t prevent it from occurring entirely.

My understanding (which could be wrong) is that WebTransport is supposed to offer the ability to send and receive datagrams with closer to UDP-level guarantees, allowing the application to continue receiving packets even when some go missing, and then the application can decide how to handle those missing packets, such as asking for retransmission. Getting an incomplete stream at the application level is what it takes to entirely avoid head-of-line blocking.

As alluded to earlier, there is zero head-of-line blocking if there is no packet loss. Outside of congested networks or the lossy fringes of cell service, I really wonder how much of an issue this is. I’m skeptical that it adds any latency for SSE vs websocket in the average benchmark or use case. The latency should be nearly identical. Your comment seems predicated on it definitely being worse, but based on what numbers? I admit it’s been a couple of years since I measured this myself, but I came away with the conclusion that websockets are massively overrated. There are definitely a handful of use cases for websockets, but… it shouldn’t be the tool everyone reaches for.

HTTP/3 is really meant to be an improvement for a small percentage of connections, which is a huge number of people at the scale that Google operates at. I don’t think there are really any big downsides to HTTP/3, so I look forward to seeing it mature, become more common, and become easier to find production grade libraries for.


xg15 on Feb 13, 2022 at 4:55 pm  [-]
> Certainly, if you have other requests in flight, you could have head-of-line blocking because of a packet that was dropped in a response stream that isn’t related to your SSE stream, but this only applies if there’s packet loss, and the packets that were lost could just as easily be SSE’s or websocket’s.

That was what I meant. Yes, head-of-line blocking can occur everywhere there is TCP, but with HTTP2, the impact is larger due to the (otherwise very reasonable) multiplexing: When a HTTP2 packet is lost, this will stall all requests that are multiplexed into this connection, whereas with websocket, it will only stall the websocket connection itself.


coder543 on Feb 13, 2022 at 4:57 pm  [-]
Yep, makes sense

rndgermandude on Feb 13, 2022 at 5:44 am  [-]
>no new connection is needed, no costly handshakes.

No new connection and no low-level connection (TCP, TLS) handshakes, but the server still has to parse and validate the http headers, route the request, and you'd probably still have to authenticate each request somehow (some auth cookie probably), which actually may start using a non-trivial amount of compute when you have tons of client->server messages per client and tons of clients.


bullen on Feb 12, 2022 at 3:19 pm  [-]
You just needed to send a "noop" (no operation) message at regular intervals.

jcelerier on Feb 12, 2022 at 4:43 pm  [-]
that puts it instantly in the "fired if you ever use it" bin

Vosporos on Feb 12, 2022 at 8:51 pm  [-]
Fired for using a keep-alive message???

johnny22 on Feb 12, 2022 at 10:59 pm  [-]
I think it comes down to whether your communication is more oriented towards sending than receiving. If the clients receive way more than they send, then SSE is probably fine, but if it's truly bidirectional then it might not work as well.

kreetx on Feb 12, 2022 at 4:14 pm  [-]
SSEs had a severe connection limit, something like 4 connections per domain per browser (IIRC), so if you had four tabs open then opening new ones would fail.

coder543 on Feb 12, 2022 at 4:46 pm  [-]
Browsers also limit the number of websocket connections. But, if you're using HTTP/2, as you should be, then the multiplexing means that you can have effectively unlimited SSE connections through a limited number of TCP connections, and those TCP connections will be shared across tabs.

(There's one person in this thread who is just ridiculously opposed to HTTP/2, but... HTTP/2 has serious benefits. It wasn't developed in a vacuum by people who had no idea what they were doing, and it wasn't developed aimlessly or without real world testing. It is used by pretty much all major websites, and they absolutely wouldn't use it if HTTP/1.1 was better... those major websites exist to serve their customers, not to conspiratorially push an agenda of broken technologies that make the customer experience worse.)


kreetx on Feb 14, 2022 at 3:05 pm  [-]
Right, but this article argues that SSE is simple and easy to debug on the wire - so is HTTP/1. HTTP/2 is easy to set up, and so are websockets, yet debugging the multiplexed HTTP/2 stream is not that simple anymore.

The SSE connection limit is a nasty surprise once you run into it; it should have been mentioned.


coder543 on Feb 14, 2022 at 3:10 pm  [-]
> The SSE connection limit is a nasty surprise once you run into it, it should have been mentioned.

It does not apply to HTTP/2, as previously noted.

> Http2 is easy to set up, so are websockets, yet debugging the multiplexed http2 stream is is not that simple anymore.

I have literally never heard of anyone I personally know having to debug HTTP/2 on the wire. Unless you believe there are frequently bugs in the HTTP/2 implementation in your browser or the library you use, this is just not a real concern. HTTP/2 has been around long enough that this is definitely not a concern of mine. I would be more worried about bugs with HTTP/3, since it is so new.

Websockets are also not especially easy to set up… they don’t work with normal HTTP servers and proxies, so you have to set up other infrastructure.


kreetx on Feb 19, 2022 at 8:12 am  [-]
Which web servers do they not work with? They have worked with everything I've used thus far (which admittedly isn't many): nginx, warp (Haskell's embedded server), relayd (OpenBSD), all with easy setup.

It also seems that compression for websockets is supported in all major browsers.

The article's argument seems to be that ws adds complexity, but this is present in pretty much all web servers already; the user need not deal with it. (HTTP/2, too, requires the same type of complexity, for that matter.)


anderspitman on Feb 12, 2022 at 7:01 pm  [-]
You can also make your HTTP/1.1 SSE endpoints available on multiple domains and have the client round-robin them. Obviously adds some complexity, but sometimes it's a tradeoff worth making for example if you're on lossy networks and trying to avoid HTTP/2 head-of-line blocking.

jcheng on Feb 12, 2022 at 5:04 pm  [-]
> Browsers also limit the number of websocket connections

True but the limit for websockets these days is in the hundreds, as opposed to 6 for regular HTTP requests.


coder543 on Feb 12, 2022 at 5:08 pm  [-]
https://stackoverflow.com/questions/26003756/is-there-a-limi...

It appears to be 30 per domain, not “hundreds”, at least as of the time this answer was written. I didn’t see anything more recent that contradicted this.

In practice, this is unlikely to be problematic unless you’re using multiple websockets per page, but the limit of 6 TCP connections is even less likely to be a problem if you’re using HTTP/2, since those will be shared across tabs, which isn’t the case for the dedicated connection used for each websocket.


jcheng on Feb 12, 2022 at 7:13 pm  [-]
It’s 255 for Chrome and has been since 2015, 200 for Firefox since longer than that.

https://chromium.googlesource.com/chromium/src/net/+/259a070...

Agree that it should be much less of a problem with HTTP/2 than HTTP/1.1.


oplav on Feb 12, 2022 at 4:41 pm  [-]
6 connections per domain per browser: https://bugs.chromium.org/p/chromium/issues/detail?id=275955

There are some hacks to work around it though.


reactor on Feb 13, 2022 at 1:21 am  [-]
Is it possible to have many SSE channels within one tab? I.e. within a webpage, let's say there are 10 different widgets that need realtime updates.

mmzeeman on Feb 12, 2022 at 3:54 pm  [-]
Did research on SSE a short while ago. Found out that the mimetype "text/event-stream" was blocked by a couple of anti-virus products. So that was a no-go for us.

pornel on Feb 12, 2022 at 6:05 pm  [-]
It's not blocked. It's just that some very badly written proxies can try to buffer the "whole" response, and SSE is technically a never-ending file.

It's possible to detect that, and fall back to long polling. Send an event immediately after opening a new connection, and see if it arrives at the client within a short timeout. If it doesn't, make your server close the connection after every message sent (connection close will make AV let the response through). The client will reconnect automatically.

Or run:

    while(true) alert("antivirus software is worse than malware")
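
Ranting aside, a minimal sketch of the detection described above (the "hello" event, the /events endpoint and the long-polling fallback are hypothetical):

    function startLongPolling() { /* hypothetical fallback transport */ }

    function connectWithFallback() {
      const source = new EventSource('/events');
      // The server is assumed to emit a "hello" event immediately on connect.
      const timer = setTimeout(() => {
        source.close(); // nothing arrived: something is probably buffering us
        startLongPolling();
      }, 3000);
      source.addEventListener('hello', () => clearTimeout(timer));
    }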

ronsor on Feb 12, 2022 at 4:14 pm  [-]
These days I feel like the only way to win against poorly designed antiviruses and firewalls is to—ironically enough—behave like malware and obfuscate what's going on.

bullen on Feb 12, 2022 at 4:29 pm  [-]
They don't block it, they cache the response until there is enough data in the buffer... just push more garbage data on the first chunks...

captn3m0 on Feb 12, 2022 at 4:19 pm  [-]
I was using SSE when they'd just launched (almost a decade ago now) and never faced any AV issues.

azinman2 on Feb 12, 2022 at 7:37 pm  [-]
Is that still the case now? How big and broad an audience do you have?

My experience, now a bit dated, is that long polling is the only thing that will work 100% of the time.


foxbarrington on Feb 12, 2022 at 3:47 pm  [-]
I’m a huge fan of SSE. In the first chapter of my book Fullstack Node.js I use it for the real-time chat example because it requires almost zero setup. I’ve also been using SSE on https://rambly.app to handle all the WebRTC signaling so that clients can find new peers. Works great.

viiralvx on Feb 12, 2022 at 4:35 pm  [-]
Rambly looks sick, thanks for sharing!

julianlam on Feb 12, 2022 at 6:18 pm  [-]
This is really interesting! I wonder why it never really took off, whereas websockets via Socket.IO/Engine.io did.

At NodeBB, we ended up relying on websockets for almost everything, which was a mistake. We were using it for simple call-and-response actions, where a proper RESTful API would've been a better (more scalable, better supported, etc.) solution.

In the end, we migrated a large part of our existing socket.io implementation to use plain REST. SSE sounds like the second part of that solution, so we can ditch socket.io completely if we really wanted to.

Very cool!


shahinghasemi on Feb 12, 2022 at 8:08 pm  [-]
> At NodeBB, we ended up relying on websockets for almost everything, which was a mistake.

Would you please elaborate on the challenges/disadvantages you've encountered in comparison to REST/HTTP?


julianlam on Feb 15, 2022 at 2:58 pm  [-]
Nothing major, just browser support at the beginning, reverse proxy support (which is no longer an issue), but the big one was extensibility.

As it turns out, while almost anyone can fire off a POST request, not many people know how to wire up a socket.io client.


samwillis on Feb 12, 2022 at 3:39 pm  [-]
I have used SSEs extensively, I think they are brilliant and massively underused.

The one thing I wish they supported was a binary event data type (mixed in with text events), effectively being able to send in my case image data as an event. The only way to do it currently is as a Base64 string.
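
For what it's worth, the Base64 route is only a few lines on the client; a sketch (the "image" event name and the /events endpoint are hypothetical):

    const source = new EventSource('/events');
    source.addEventListener('image', (event) => {
      // Decode the base64 payload into bytes and hand it to the page as a Blob.
      const bytes = Uint8Array.from(atob(event.data), (c) => c.charCodeAt(0));
      const url = URL.createObjectURL(new Blob([bytes], { type: 'image/jpeg' }));
      document.getElementById('preview').src = url; // assumes an <img id="preview">
    });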


keredson on Feb 12, 2022 at 5:25 pm  [-]
SSE supports gzip compression, and a gzip-ed base64 is almost as small as the original jpg:

    $ ls -l PXL_20210926_231226615.*
    -rw-rw-r-- 1 derek derek 8322217 Feb 12 09:20 PXL_20210926_231226615.base64
    -rw-rw-r-- 1 derek derek 6296892 Feb 12 09:21 PXL_20210926_231226615.base64.gz
    -rw-rw-r-- 1 derek derek 6160600 Oct  3 15:31 PXL_20210926_231226615.jpg


samwillis on Feb 12, 2022 at 5:43 pm  [-]
Quite true, however from memory Django doesn’t (or didn’t) support gzip on streaming responses and as we host on Heroku we didn’t want to introduce another http server such as Nginx into the Heroku Dyno.

As an aside, Django with Gevent/Gunicorn does SSE well from our experience.


jtwebman on Feb 12, 2022 at 4:45 pm  [-]
Send an event that tells the browser to request the binary image.

samwillis on Feb 12, 2022 at 4:51 pm  [-]
In my case I was aiming for low latency with a dynamically generated image. To send a URL to a saved image, I would have to save it first to a location for the browser to download it from. That would add at least 400ms, probably more.

Ultimately what I did was run an SSE request and long polling image request in parallel, but that wasn’t ideal as I had to coordinate that on the backend.


bckr on Feb 12, 2022 at 5:26 pm  [-]
I'm curious if you could have kept the image in memory (or in Redis) and served it that way

samwillis on Feb 12, 2022 at 5:39 pm  [-]
That’s actually not too far from what we do. The image is created by a backend service with communication (queue and responses) to the front end servers via Redis. However rather than saving the image in its entirety to Redis, it’s streamed via it in chunks using LPUSH and BLPOP.

This lets us then stream the image as a streaming HTTP response from the front end, potentially before the jpg has finished being generated on the backend.

So from the SSE we know the url the image is going to be at before it’s ready, and effectively long poll with a ‘new Image()’.


dpweb on Feb 12, 2022 at 3:34 pm  [-]
Very easy to implement - still using code I wrote 8 years ago, which is like 20 lines client and server, choosing it at the time over ws.

Essentially just new EventSource(), text/event-stream header, and keep conn open. Zero dependencies in browser and nodejs. Needs no separate auth.
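
For a sense of scale, a hedged sketch of such a zero-dependency setup (endpoint and port are hypothetical):

    // server.js (Node.js)
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
      });
      const timer = setInterval(() => {
        res.write('data: ' + new Date().toISOString() + '\n\n');
      }, 1000);
      req.on('close', () => clearInterval(timer)); // stop pushing when the client leaves
    }).listen(8080);

    // client (browser, served from the same origin)
    const source = new EventSource('/');
    source.onmessage = (event) => console.log(event.data);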


oneweekwonder on Feb 12, 2022 at 3:33 pm  [-]
Personally i use mqtt over websockets, paho[0] is a good js library. It supports last-will for disconnects, and the message-queue design makes it easy to reason about and debug. There are also a lot of MQ brokers that will scale well.

[0]: https://www.eclipse.org/paho/index.php?page=clients/js/index...


alin23 on Feb 12, 2022 at 10:43 pm  [-]
ESPHome (an easy to use firmware for ESP32 chips) uses SSE to send sensor data to subscribers.

I made use of that in Lunar (https://lunar.fyi/#sensor) to be able to adjust monitor brightness based on ambient light readings from an external wireless sensor.

At first it felt weird that I have to wait for responses instead of polling with requests myself, but the ESP is not a very powerful chip and making one HTTP request every second would have been too much.

SSE also allows the sensor to compare previous readings and only send data when something changed, which removes some of the complexity with debouncing in the app code.


rough-sea on Feb 12, 2022 at 7:16 pm  [-]
A complete SSE example in 25 lines on Deno Deploy: https://dash.deno.com/playground/server-sent-events

waylandsmithers on Feb 12, 2022 at 6:17 pm  [-]
I had the pleasure of being forced to use SSE due to working with a proxy that didn't support websockets.

Personally I think it's a great solution for longer running tasks like "Export your data to CSV" when the client just needs to get an update that it's done and here's the url to download it.


sb8244 on Feb 12, 2022 at 3:09 pm  [-]
I can’t find any downsides of SSE presented. My experience is that they’re nice in theory but the devils in the details. The biggest issue being that you basically need http/2 to make them practical.

bullen on Feb 12, 2022 at 3:14 pm  [-]
Absolutely not, HTTP/1.1 is the way to make SSE fly:

https://github.com/tinspin/rupy/wiki/Comet-Stream

Old page, search for "event-stream"... Comet-stream is a collection of techniques of which SSE is one.

My experience is that SSE goes through anti-viruses better!


anderspitman on Feb 12, 2022 at 7:17 pm  [-]
Take this for what it's worth, but I see you share rupy on pretty much every thread that mentions WebSockets, and I click on the link pretty much every time, and I still have basically no idea what it is. Documentation probably isn't your priority at the moment, but even just a couple paragraphs could go a long way.

ByThyGrace on Feb 12, 2022 at 7:39 pm  [-]
I had the same impression as you. I want to learn more about fuse but even their "sales pitch" page is in the same tone of "fuse can do a lot" (and that's fine, I'm sold!) except there is very little documentation at the moment.

bullen on Feb 13, 2022 at 12:54 am  [-]
I know, I just go by "the code is so small, you should have time to read it".

rupy is a minimalist, from scratch, HTTP app-server that uses non-blocking IO so it can scale comet-stream (SSE or not) which is much better than WebSockets: https://news.ycombinator.com/item?id=30313403

I will never make projects that you just download and double click to run.

I want my users to understand how it works more than I want them to use it!

Or maybe I'm just lazy... :S


mwcampbell on Feb 12, 2022 at 4:05 pm  [-]
> My experience is that SSE goes through anti-viruses better!

Hmm, another commenter says the opposite:

https://news.ycombinator.com/item?id=30313692


bullen on Feb 12, 2022 at 4:36 pm  [-]
He just needs to push more data on the reply to force the anti-virus to flush the data. Easy peasy.

anderspitman on Feb 12, 2022 at 7:04 pm  [-]
In some cases you might actually be better served sticking with HTTP/1.1 and serving SSE over several domains, to avoid HTTP/2 head-of-line blocking.

U1F984 on Feb 12, 2022 at 10:44 pm  [-]
The extra setup step for websocket should not be required: https://caddyserver.com/docs/v2-upgrade#proxy

I also had no problems with HAProxy, it worked with websockets without any issues or extra handling.


francislavoie on Feb 13, 2022 at 1:13 am  [-]
That's correct, just `reverse_proxy` alone is enough. The request matcher is only needed if you want to make the same request paths get proxied to your HTTP upstream if it doesn't have those websocket connection headers. But if you're always using a path like `/ws` for websockets then you don't need to match on headers.

ponytech on Feb 12, 2022 at 9:59 pm  [-]
One problem I had with WebSockets is you can not set custom HTTP headers when opening the connection. I wanted to implement a JWT based authentication in my backend and had to pass the token either as a query parameter or in a cookie.

Anyone knows the rationale behind this limitation?


charlietran on Feb 12, 2022 at 10:48 pm  [-]
The workaround/hack is to send your token via the "Sec-WebSocket-Protocol" header, which is the one header you're allowed to set in browser when opening a connection. The catch is that your WebSocket server needs to echo this back on a successful connection.
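
A sketch of that workaround from the browser side (the token source is hypothetical):

    const token = getJwtSomehow(); // hypothetical; e.g. from your login flow
    // Offer the token as a "subprotocol"; the server must select one of the
    // offered values and echo it back in Sec-WebSocket-Protocol, or the
    // browser will abort the connection.
    const ws = new WebSocket('wss://example.com/ws', ['access_token', token]);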

goodpoint on Feb 12, 2022 at 3:08 pm  [-]
--- WebSockets cannot benefit from any HTTP feature. That is:

    No support for compression
    No support for HTTP/2 multiplexing
    Potential issues with proxies
    No protection from Cross-Site Hijacking
---

Is that true? The web never cease to amaze.


__s on Feb 12, 2022 at 3:13 pm  [-]
WebSockets support compression (ofc, the article goes on to detail this & point out flaws. I'd argue that compression is not generally useful in web sockets in the context of many small messages, so it makes sense to be default-off for servers as it's something which should be enabled explicitly when necessary, but the client should be default-on since the server is where the resource usage decision matters)

I don't see why WebSockets should benefit from HTTP. Besides the handshake to setup the bidirectional channel, they're a separate protocol. I'll agree that servers should think twice about using them: they necessitate a lack of statelessness & HTTP has plenty of benefits for most web usecases

Still, this is a good article. SSE looks interesting. I host an online card game openEtG, which is far enough from real time that SSE could potentially be a way to reduce having a connection to every user on the site


bullen on Feb 12, 2022 at 3:18 pm  [-]
The problem with WebSockets is that they are:

1) More complex and binary, so you cannot debug them as easily, especially live and especially if you use HTTPS.

2) The implementations don't parallelize the processing; with Comet-Stream + SSE you just need to find an application server that has concurrency and you are set to scale across all of the machine's cores.

3) WebSockets still have more problems with Firewalls.


quickthrower2 on Feb 12, 2022 at 4:26 pm  [-]
Is it worth upgrading a long polling solution to SSE? Would I see much benefit?

What I mean by that is client sends request, server responds in up to 2 minutes with result or a try again flag. Either way client resends request and then uses response data if provided.


bullen on Feb 12, 2022 at 4:28 pm  [-]
Yes, since IE7 is out of the game, long-polling is no longer needed.

Comet-stream and SSE will save you a lot of bandwidth and CPU!!!


layer8 on Feb 12, 2022 at 8:04 pm  [-]
What is particular about IE7? According to https://caniuse.com/eventsource, SSE is unsupported through IE11.

bullen on Feb 13, 2022 at 12:25 am  [-]
You don't need to use Event-Source to use SSE, look at how I implemented it here:

https://github.com/tinspin/fuse/blob/master/res/play.html#L1...

XHR readyState 3 was wrongly implemented in IE7; they fixed it in IE8.
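
For reference, a hedged sketch of consuming a comet-stream/SSE endpoint with plain XHR instead of EventSource (the "pull" endpoint is hypothetical):

    const xhr = new XMLHttpRequest();
    let seen = 0;
    xhr.open('GET', 'pull');
    xhr.onreadystatechange = () => {
      // readyState 3 ("loading") fires repeatedly as the body streams in.
      if (xhr.readyState === 3) {
        const chunk = xhr.responseText.slice(seen); // only the new bytes
        seen = xhr.responseText.length;
        console.log(chunk); // raw "data: ..." lines; event parsing left out
      }
    };
    xhr.send();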


nickjj on Feb 12, 2022 at 5:06 pm  [-]
This is why I really really like Hotwire Turbo[0] which is a back-end agnostic way to do fast and partial HTML based page updates over HTTP and it optionally supports broadcasting events with WebSockets (or SSE[1]) only when it makes sense.

So many alternatives to Hotwire want to use WebSockets for everything, even for serving HTML from a page transition that's not broadcast to anyone. I share the same sentiment as the author in that WebSockets have real pitfalls and I'd go even further and say unless used tastefully and sparingly they break the whole ethos of the web.

HTTP is a rock solid protocol and super optimized / well known and easy to scale since it's stateless. I hate the idea of going to a site where after it loads, every little component of the page is updated live under my feet. The web is about giving users control. I think the idea of push based updates like showing notifications and other minor updates are great when used in moderation but SSE can do this. I don't like the direction of some frameworks around wanting to broadcast everything and use WebSockets to serve HTML to 1 client.

I hope in the future Hotwire Turbo alternatives seriously consider using HTTP and SSE as an official transport layer.

[0]: https://hotwired.dev/

[1]: https://twitter.com/dhh/status/1346095619597889536?lang=en


havkom on Feb 12, 2022 at 4:40 pm  [-]
The most compatible technique is long polling (with a re-established connection after X seconds if no event). Works surprisingly well in many cases and is not blocked by any proxies.

bullen on Feb 12, 2022 at 4:46 pm  [-]
Long-polling is blocked to almost exactly the same extent as comet-stream and SSE. The only thing you have to do is to push more data on the response so that the proxy is forced to flush it!

Since IE7 is no longer used, we can bury long-polling for good.


notreallyserio on Feb 12, 2022 at 6:21 pm  [-]
How much more data do you have to send? Is it small enough you aren't concerned about impacting user traffic quotas?

bullen on Feb 12, 2022 at 10:43 pm  [-]
Just enough to trigger the buffer... 1024-8192 bytes or something like that... a fart in space since it's just once per session!

mst on Feb 13, 2022 at 6:19 pm  [-]
Reminds me of needing to do similar things to convince browsers to render e.g. a custom 404 response.

Mildly annoying, but hardly onerous.


laerus on Feb 12, 2022 at 7:38 pm  [-]
With WebTransport around the corner I don't think it's worth investing time in learning what seems to me an obsolete technology. I can understand it for already-big projects working with SSE that don't want to pay the cost of upgrading/changing, but for anything new I can't be bothered, since websockets work well enough for my use cases.

What worries me though is the trend of dismissing newer technologies as useless or bad, and the resistance to change.


slimsag on Feb 12, 2022 at 7:41 pm  [-]
I'm confused, you believe that web developers have a trend of dismissing newer technologies and resistance to change? Have I missed something or..?

jessaustin on Feb 12, 2022 at 8:08 pm  [-]
Around the corner? There seems to be nothing about this in any browser. [0] That would put this what, five years out before it could be used in straightforward fashion? Please be practical.

[0] https://caniuse.com/?search=webtransport


0xbkt on Feb 12, 2022 at 8:57 pm  [-]
jessaustin on Feb 12, 2022 at 9:27 pm  [-]
Thanks for pointing that out. I suppose caniuse missing out on the first experimental version of the first browser to support this isn't terribly misleading. Maybe when they get the basics of the API figured out we can start deprecating other things...

coder543 on Feb 12, 2022 at 7:58 pm  [-]
WebTransport seems like it will be significantly lower level and more complex to use than SSE, both on the server and the client. To say that this "obsoletes" SSE seems like a serious stretch.

SSE runs over HTTP/3 just as well as any other HTTP feature, and WebTransport is built on HTTP/3 to give you much finer grained control of the HTTP/3 streams. If your application doesn't benefit significantly from that control, then you're just adding needless complexity.


lima on Feb 12, 2022 at 3:25 pm  [-]
One issue with SSE is that dumb enterprise middleboxes and Windows antivirus software break them :(

They'll try to read the entire stream to completion and will hang forever.


bullen on Feb 12, 2022 at 3:29 pm  [-]
I managed to get through almost all middle men by using 2 tricks:

1) Push a large amount of data on the pull (the comet-stream SSE never ending request) response to trigger the middle thing to flush the data.

2) Using SSE instead of just Comet-Stream since they will see the header and realize this is going to be real-time data.

We had a 99.6% success rate on the connection from 350,000 players from all over the world (even satellite connections in the Pacific and modems in Siberia), which is a world record for any service.


Matheus28 on Feb 12, 2022 at 4:12 pm  [-]
While 350k simultaneous connections is nice, I'd be extremely skeptical of that being any kind of world record

bullen on Feb 12, 2022 at 4:38 pm  [-]
The world record is not the 1,100 concurrent users per machine (T2 small, then medium, on AWS) we had at peak, but the 99.6% connection success rate we managed. All other multiplayer games have ~80% if they are lucky!

350,000 was the total number of players over 6 years.


ta-sus on Feb 13, 2022 at 4:31 am  [-]
Two nines and a world record, get this man a trophy!


wedn3sday on Feb 12, 2022 at 6:36 pm  [-]
This seems fairly cool, and I appreciate the write-up, but god I hate it so much when people write code samples that try to be fancy and use non-code characters. Clarity is much more important than aesthetics when it comes to code examples; if I'm trying to understand something I've never seen before, having a bunch of extra non-existent symbols does not help.

DHowett on Feb 12, 2022 at 6:42 pm  [-]
I’m guessing that you are referring to the “coding ligatures” in the author’s font selection for code blocks?

You can likely configure your user agent to ignore site-specified fonts.


loh on Feb 12, 2022 at 6:49 pm  [-]
Are you referring to the `!==` and `=>` in their code being converted to what appears to be a single symbol?

Upon further inspection, it looks like the actual code on the page is `!==` and `=>` but the font ("Fira Code") seems to be somehow converting those sequences of characters into a single symbol, which is actually still the same number of characters but joined to appear as a single one. I had no idea fonts could do that.


red_trumpet on Feb 12, 2022 at 8:13 pm  [-]
That's called a ligature[1], and classically used for joining, for example, ff or fi into more readable symbols.

[1] https://en.wikipedia.org/wiki/Ligature_(writing)


Rebelgecko on Feb 12, 2022 at 6:42 pm  [-]
Which characters, the funky '≠'? I've seen those pop up a few other times recently, which makes me wonder if there's some editor extension that just came out that maps != and !==

asiachick on Feb 12, 2022 at 6:41 pm  [-]
Agreed. I use them in my editor but I ban them from my blog posts. They aren't helpful to others

Too on Feb 12, 2022 at 5:45 pm  [-]
Can someone give a brief summary of how this differs from long polling? It looks very similar, except it has a small layer of formalized event/data/id structure on top. Are there any differences in the lower connection layers, or any added support by browsers and proxies given some new headers?

What are the benefits of SSE vs long polling?


TimWolla on Feb 12, 2022 at 5:47 pm  [-]
> What are the benefits of SSE vs long polling?

The underlying mechanism effectively is the same: A long running HTTP response stream. However long-polling commonly is implemented by "silence" until an event comes in and then performing another request to wait for the next event, whereas SSE sends you multiple events per request.
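
For illustration, two events on one long-lived response might look like this; the connection stays open between them, and if it drops, the browser reconnects and sends the last seen id back in a Last-Event-ID request header:

    id: 41
    event: price
    data: {"btc": 42000}

    id: 42
    event: price
    data: {"btc": 42100}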


anderspitman on Feb 12, 2022 at 7:03 pm  [-]
SSE doesn't support binary data. Text only.

TimWolla on Feb 12, 2022 at 5:45 pm  [-]
> RFC 8441, released on September 2018, tries to fix this limitation by adding support for “Bootstrapping WebSockets with HTTP/2”. It has been implemented in Firefox and Chrome. However, as far as I know, no major reverse-proxy implements it.

HAProxy supports RFC 8441 automatically. It's possible to disable it, because support in clients tends to be buggy-ish: https://cbonte.github.io/haproxy-dconv/2.4/configuration.htm...

Generally I can second recommendation of using SSE / long running response streams over WebSockets for the same reasons as the article.


rcarmo on Feb 12, 2022 at 4:39 pm  [-]
I have always preferred SSE to WebSockets. You can do a _lot_ with a minuscule amount of code, and it is great for updating charts and status UIs on the fly without hacking extra ports, server daemons and whatnot.

axiosgunnar on Feb 12, 2022 at 3:45 pm  [-]
So do I understand correctly that when using SSE, the login cookie of the user is not automatically sent with the SSE request like it is with all normal HTTP requests? And I have to redo auth somehow?

bastawhiz on Feb 12, 2022 at 4:06 pm  [-]
It should automatically send first party cookies, though you may need to specify withCredentials.
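
For a cross-origin stream, that looks like this (hypothetical URL); same-origin requests send first-party cookies by default:

    const source = new EventSource('https://api.example.com/events', {
      withCredentials: true, // include cookies on the cross-origin request
    });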

KaoruAoiShiho on Feb 12, 2022 at 6:01 pm  [-]
I have investigated SSE for https://fiction.live a few years back but stayed with websockets. Maybe it's time for another look. I pay around $300 a month for the websocket server, it's probably not worth it yet to try to optimize that but if we keep growing at this rate it may soon be.

sysid on Feb 25, 2022 at 8:59 pm  [-]
For starlette/fastapi there is a battle tested lib: https://github.com/sysid/sse-starlette

tgv on Feb 12, 2022 at 5:48 pm  [-]
But SSE is a one-way street, isn't it? The client gets one chance to send data, and that's it? Or is there some way around it?

jessaustin on Feb 12, 2022 at 9:12 pm  [-]
Clients can always send normal http messages to the server. Probably not ideal for "bi-directional" traffic, but it's an option in a pinch.

llacb47 on Feb 12, 2022 at 3:30 pm  [-]
Google uses SSE for hangouts/gchat.

ravenstine on Feb 12, 2022 at 3:45 pm  [-]
I usually use SSEs for personal projects because they are much simpler than WebSockets (not that those aren't also simple) and most of the time my web apps just need to listen for something coming from the server and don't need bidirectional communication.

andrew_ on Feb 13, 2022 at 1:00 am  [-]
EventSource has been around for eons, and is what the precursor to webpack-dev-server used for HMR events. It had the advantage of supporting ancient browsers since the spec has been around a long time and even supported by oldIE.

mterron on Feb 13, 2022 at 2:49 am  [-]
I've found hasses (https://github.com/hyper-prog/hasses) a really nice SSE server.

Good performance, easy to use, easy to integrate.


captn3m0 on Feb 12, 2022 at 4:22 pm  [-]
I think SSE might make a lot of sense for Serverless workloads? You don't have to worry about running a websocket server, any serverless host with HTTP support will do. Long-polling might be costlier though?

jFriedensreich on Feb 12, 2022 at 4:27 pm  [-]
this is what i have been telling people for years, but it's hard to get the word out there. usually every dev just reflexively reaches for websockets when anything realtime or push-related comes up.

pbowyer on Feb 12, 2022 at 8:44 pm  [-]
There's also the Mercure protocol, built on top of Server-Sent Events: https://mercure.rocks/

toomim on Feb 12, 2022 at 11:51 pm  [-]
And the Braid protocol, using a variation on SSE: https://braid.org

beebeepka on Feb 12, 2022 at 3:20 pm  [-]
So, what are the downsides to using websockets? They are my go-to solution when I am doing a game, chat, or something else that needs interactivity.

bullen on Feb 12, 2022 at 3:22 pm  [-]
herodoturtle on Feb 12, 2022 at 4:23 pm  [-]
Been reading all your comments on this thread (thank you) with interest.

Can you recommend some resources for learning SSE in depth?


bullen on Feb 12, 2022 at 4:40 pm  [-]
I would look at my own app-server: https://github.com/tinspin/rupy

It's not the most well documented but it's the smallest implementation while still being one of the most performant so you can learn more than just SSE.


jshen on Feb 12, 2022 at 10:52 pm  [-]
Question for those of you who build features on web using things like SSE or web sockets, how do you build those features in native mobile apps?

johnny22 on Feb 12, 2022 at 10:58 pm  [-]
isn't that just an event dispatcher?

jshen on Feb 13, 2022 at 12:41 am  [-]
Huh? web sockets and SSE are part of the browser standards. When you build an ios or android app you aren't typically building against a browser. Do people typically build completely different solutions for ios and android compared to web?

njx on Feb 12, 2022 at 7:37 pm  [-]
My theory for why SSE did not take off is that WordPress does not support it.

pictur on Feb 12, 2022 at 3:16 pm  [-]
Does SSE offer support for capturing connect/disconnect situations?

bullen on Feb 12, 2022 at 3:21 pm  [-]
The TCP stack can give you that info if you are lucky in your topology, but generally you cannot rely on this working 100%.

The way I solve it is to send "noop" messages at regular intervals so that the socket write will return -1 and then I know something is off and reconnect.
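
A hedged Node.js sketch of that noop approach on the server (attachHeartbeat and the clients set are hypothetical):

    // res is the http.ServerResponse of an open SSE request.
    function attachHeartbeat(res, clients) {
      clients.add(res);
      const timer = setInterval(() => {
        // Lines starting with ":" are comments in the SSE format and are
        // ignored by EventSource, making them a cheap keep-alive probe.
        res.write(': noop\n\n');
      }, 15000);
      // Writes to a dead peer eventually surface as 'close'/'error' on the
      // response, which is where we clean up and let the client reconnect.
      res.on('close', () => { clearInterval(timer); clients.delete(res); });
    }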


anderspitman on Feb 12, 2022 at 7:07 pm  [-]
My personal browser streaming TL;DR goes something like this:

* Start with SSE

* If you need to send binary data, use long polling or WebSockets

* If you need fast bidi streaming, use WebSockets

* If you need backpressure and multiplexing for WebSockets, use RSocket[0] or omnistreams[1] (one of my projects).

* Make sure you account for SSE browser connection limits, preferably by minimizing the number of streams needed, or by using HTTP/2 (mind head-of-line blocking) or splitting your HTTP/1.1 backend across multiple domains and doing round-robin on the frontend.

[0]: https://rsocket.io/

[1]: https://github.com/omnistreams/omnistreams-spec


whazor on Feb 12, 2022 at 3:02 pm  [-]
I tried out server-sent events, but they are still quite troublesome given the lack of headers and cookies. I remember I needed some polyfill version, which caused more issues.

bullen on Feb 12, 2022 at 3:15 pm  [-]
How do you mean lack of headers and cookies?

That is wrong. Edit: Actually it seems correct (a javascript problem, not SSE problem) but it's a non-problem if you use a parameter for that data instead and read it on the server.


tytho on Feb 12, 2022 at 3:31 pm  [-]
You cannot send custom headers when using the built-in EventSource[1] constructor, however you can pass the ‘include’ value to the credentials option. Many polyfills allow custom headers.

However you are correct that if you’re not using JavaScript and connecting directly to the SSE endpoint via something else besides a browser client, nothing is preventing anyone from using custom headers.

[1] https://developer.mozilla.org/en-US/docs/Web/API/EventSource...


bullen on Feb 12, 2022 at 3:35 pm  [-]
Aha, well why do you need to send a header when you can just put the data on the GET URL like so "blabla?cookie=erWR32" for example?

In my example I use this code:

        var source = new EventSource('pull?name=one');
        source.onmessage = function (event) {
           document.getElementById('events').innerHTML += event.data;
        };

tytho on Feb 12, 2022 at 3:52 pm  [-]
I think that works great! The complaint I've heard is that you may need to support multiple ways to authenticate, opening up more attack surface.

kreetx on Feb 12, 2022 at 3:53 pm  [-]
What if you use http-only cookies?

tytho on Feb 12, 2022 at 3:56 pm  [-]
You can pass a ‘withCredentials’ option.

withinboredom on Feb 12, 2022 at 3:34 pm  [-]
I’m pretty sure I saw him sending headers in the talk. Did you watch the talk?

tytho on Feb 12, 2022 at 3:50 pm  [-]
He was likely using a polyfill. It’s definitely not in the spec and there’s an open discussion about trying to get it added: https://github.com/whatwg/html/issues/2177

The_rationalist on Feb 12, 2022 at 4:49 pm  [-]
for bidi RSocket is much better than websocket; in fact its official support is the best feature of Spring Boot
