You can do things differently

Check out the videos from Goatmire to get a mind-blowing keynote from Zach Daniel that isn't all about Ash. Or the next thing to fulfill the promise of Sonic Pi, courtesy of Sam Aaron. Or visit a hacker's paradise with Nerves-powered drone flight by Damir. The videos are also on YouTube if you prefer.

My talk from Oredev this year also dropped. People particularly seemed to appreciate my slides. I would say they are the opposite of AI-generated. A heat gun was involved. White-out/Tippex was used.

Liveness and events

Soft realtime. Also known as consistently low latency. This is one of the promises of the BEAM, and it is the reason why Phoenix is so neat for building "realtime UI" on the web. I talked a bit about the value of events in the previous newsletter, and about using the BEAM to do work differently and leverage the power of the platform. Let's continue that theme.

The BEAM optimizes a fair bit to avoid anything holding up execution for an extra long time. Meaning when you are doing typical web stuff (talk to the DB, shuffle some data into JSON or a template) there is essentially nothing holding you back. On Python or Ruby, when contending with the GIL, you end up with green threads waiting on other green threads to yield and the event loop to crank another turn. We have fully parallel schedulers, so we have more event loops. We also have pre-emption: if a piece of work occupies a scheduler for too long (too many function calls), it gets booted to the back of the line. This is very important because that's where we get the "consistently" in consistently low latencies. If other work in your system makes the UI latency spike frequently, you don't have consistently low latencies. You have spiky latency. If all latencies go up at the same time because the system is under massive load, you are still at least seeing consistent latencies.
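You can see the pre-emption at work in a few lines. This is a minimal sketch (module and function names are mine, not from any library): one process burns scheduler time in a hot loop, and another process still gets scheduled promptly because the VM pre-empts after a budget of function calls ("reductions").

```elixir
defmodule PreemptDemo do
  # A tight loop that consumes reductions; each function call
  # counts toward the BEAM's pre-emption budget.
  def busy_loop(0), do: :done
  def busy_loop(n), do: busy_loop(n - 1)

  def demo do
    # Hog a scheduler with CPU-bound work...
    spawn(fn -> busy_loop(20_000_000) end)

    # ...and show that another process still runs promptly.
    parent = self()
    spawn(fn -> send(parent, :pong) end)

    receive do
      :pong -> :ok
    after
      1_000 -> :timeout
    end
  end
end
```

Even if you pin the VM to a single scheduler (`iex --erl "+S 1"`), the `:pong` arrives quickly; the busy loop gets booted to the back of the line instead of holding everything up.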
There are ways to make latencies way less consistent, and you need to be diligent about keeping them low as you do more work. The BEAM is fundamentally quite resilient to your choices though. If you are doing 500ms of work in your rendering pipeline you will have a latency worse than 500ms. But it shouldn't be much worse. So this is why people like it for "realtime". Hard realtime is a different topic, usually reserved for motor drivers and safety systems. Consult your local RTOS. Soft realtime mostly means faster than you care about.

I've done a fair number of demos and experiments with Membrane. My first real conference talk, and later my shortest-notice conference talk. Interactive demos with live media processing. Membrane lets you do this. Media is finicky and annoying, but what is so neat about Membrane is that every element in the processing pipeline is a GenServer. While inspecting any given frame of video or buffer of audio samples, an element can decide "I want to signal something" and send out a notification, a Phoenix.PubSub broadcast, or similar. Meaning you can tell your UI something is happening mid-stream. This is very hard to do well with ffmpeg from Python.

I've been poking at ex_nvr to pull keyframes from an IP camera (security camera) video stream and shove them to the side for live inference. From an RTSP stream you can get this really snappy. And you can do the inference off on the side while you keep letting images stream through for display. And you can get as close to realtime as your hardware allows.

Membrane also exposes an interesting optimization. The BEAM's metaphor of shared-nothing processes and message passing is true enough, but doing that naively, copying everything, would be very inefficient. Any slightly larger binary (over 64 bytes) will instead be stored separately in memory and passed by reference instead of being copied around all the time. You don't handle this; the VM does. This means that Membrane doesn't explode in memory usage as you pass frames around.
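The mid-stream signaling idea can be sketched without the actual Membrane API. Below is a plain GenServer with hypothetical names (not Membrane's real callbacks): a pipeline stage that inspects buffers as they flow through and emits a side-channel notification the moment it spots something interesting.

```elixir
defmodule KeyframeStage do
  use GenServer

  # Hypothetical sketch of a pipeline stage: buffers stream through,
  # and interesting ones trigger a notification mid-stream.
  # In a real app the send/2 could be a Phoenix.PubSub broadcast.

  def start_link(listener), do: GenServer.start_link(__MODULE__, listener)

  def push(stage, buffer), do: GenServer.cast(stage, {:buffer, buffer})

  @impl true
  def init(listener), do: {:ok, listener}

  @impl true
  def handle_cast({:buffer, %{keyframe?: true} = buf}, listener) do
    # Signal the UI (or anything else) without stopping the stream.
    send(listener, {:keyframe_spotted, buf})
    {:noreply, listener}
  end

  def handle_cast({:buffer, _buf}, listener), do: {:noreply, listener}
end
```

The stream keeps flowing either way; the notification is a cheap message on the side, which is exactly what makes "tell your UI something is happening mid-stream" so natural here.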
It is quite efficient for how much power it exposes.

Phoenix Channels for JavaScripty frontends, and LiveView for pure Elixir, let us leverage this heavily. We have events coming from the depths of the system and we can act on them. Speed is a unique quality of a system. Live progress reporting is golden. This is hard stuff to do well in many ecosystems, and it is generally assumed with an Elixir application. Why wouldn't you act on events? You don't have to define four layers of contracts. You don't put significant extra load on a massive, expensive, kafkaesque event bus.

This is also why there is so much upside in a cohesive Elixir application. The moment a less event-friendly runtime gets involved you need queues and workers. Sometimes that's worthwhile, but the cost is way more significant.

So enjoy your low latencies, your immediacy, and show it off.

Thank you for reading. I appreciate it.
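P.S. To show how cheap live progress reporting is, here is a minimal sketch (the names are mine): a worker reports progress as plain messages. In a LiveView the same messages would arrive in `handle_info/2` and update the socket assigns; no queue, no worker framework.

```elixir
defmodule ProgressWorker do
  # Sketch: do work in chunks and report progress as messages.
  # In Phoenix these would typically land in a LiveView's
  # handle_info/2, or be broadcast over Phoenix.PubSub.
  def run(report_to, steps) do
    for i <- 1..steps do
      # ... a chunk of actual work would go here ...
      send(report_to, {:progress, i, steps})
    end

    send(report_to, :done)
  end
end
```

From a LiveView you would `spawn(fn -> ProgressWorker.run(self_pid, 10) end)` on mount and render a progress bar from the messages as they come in.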