I build stuff.


Stuff like:

Live Video

I built a network infrastructure that supported one-to-many broadcasts with less than 700ms of camera-to-monitor latency across the public internet. Latency was a really interesting problem because it required optimizing every facet of the pipeline: image acquisition, video compression, the autoscaling edge-origin network architecture, and the decompression and playback of the video. Each stage had to be fast and parallel enough to sustain 4K video at 60 fps, while handing its data off to the next stage as quickly as possible.
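One way to think about that pipeline is as a latency budget: every stage spends some of the 700ms, so shaving any one of them helps the glass-to-glass total. Here's a minimal sketch of that accounting; the stage names mirror the ones above, but the millisecond figures are illustrative assumptions, not measurements from the real system.

```python
# Illustrative glass-to-glass latency budget. The per-stage numbers
# are made up for the example; only the structure (stages sum to the
# end-to-end total) reflects the pipeline described above.
BUDGET_MS = {
    "image acquisition": 30,
    "video compression": 100,
    "network transit (edge-origin)": 250,
    "decompression": 60,
    "playback / display": 60,
}

def total_latency(budget):
    """Every stage adds to the camera-to-monitor total."""
    return sum(budget.values())

if __name__ == "__main__":
    total = total_latency(BUDGET_MS)
    print(f"total: {total} ms of a 700 ms target")
```

The point of writing it down this way is that no single stage dominates: you only get under 700ms by trimming all of them.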

Most video latency comes from the networking protocol used to carry the compressed video to the viewer's machine. Most established live video systems use HLS or RTMP, which have latencies of 3 to 30 seconds. To minimize that latency, I used the SRT protocol for all networking. SRT exposes a tunable balance between latency and reliability: you can trade a more responsive stream for a higher chance of visible video corruption. At the time, SRT was a very new protocol that wasn't quite ready for production use, so I worked with its creators to fix some bugs and polish a GStreamer plugin.
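The latency/reliability tradeoff boils down to a deadline: the receiver buffers for a configured latency window, and a lost packet is only worth retransmitting if the resend can land inside that window; otherwise the decoder just sees a gap. Here's a toy model of that decision, with one full round trip assumed for detect-plus-resend; the numbers and the simplification are mine, not SRT internals.

```python
def packet_recoverable(loss_detected_ms, rtt_ms, latency_window_ms):
    """Toy model of a latency-window tradeoff like SRT's: a lost
    packet can be retransmitted (assume detect + resend costs ~one
    RTT) only if the resend arrives before the receiver's playback
    deadline, i.e. within the configured latency window."""
    return loss_detected_ms + rtt_ms <= latency_window_ms

# With a 120 ms window over an 80 ms RTT path: a loss detected
# 20 ms into the window can still be repaired, but one detected
# at 60 ms cannot, so the decoder gets a gap (possible visible
# corruption) instead of a stall.
recoverable = packet_recoverable(20, 80, 120)
too_late = packet_recoverable(60, 80, 120)
```

Shrinking the window makes the stream more responsive but pushes more losses past the deadline, which is exactly the tradeoff described above.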

The downside to using SRT was that few systems properly supported it, so we spent a lot of time building our own SRT-compatible tools. One example is our Unity video player, which on Android actually outperformed VLC.

Hmu