Can I take it too far ... enough?


  • The Code BEAM Stockholm Lite vlog is done. 
  • Did my last bit of client work yesterday. Now we do vacation. All of July and some of August. Sweden style. It will start mildly with a zoo-type visit with the family and end with barefoot hippies and fire up north. As it should.
  • BEAM Radio should be a bit infrequent during summer. It has had gaps before, but those were unintended. Gaps during July? Intended.
  • Preparing a talk for the big Swedish dev conference Øredev. The theme and premise for this one are really interesting and right up my alley:

    ---

    If the only option was to start over from scratch, what would you do differently?
    What is “Halt and catch fire” about? This year for Øredev 17th Edition we decided to explore how differently tech would shape the world if the only option was to restart from scratch.

    ---

    You can find my blurb here.
 

Standard bag of tricks

Task.async_stream

The Elixir standard library has a lot of nice and useful stuff in it. In my day-to-day I use the Enum module a ton. I rarely touch the Stream module but it is there if I need it. Enum.map is an everyday occurrence. A super familiar API.

The fun thing about mapping through an enumerable is that each application of the mapper function has no dependence on the others. Unless you do crimes. This means it is trivial to parallelize. Shove a Task.async into the mapper function and then Task.await_many later in the pipeline. Right?

Yeah, sure. Or just use Task.async_stream, which does that more gracefully: it lets you get results out of order if order doesn't matter, turns the whole thing into a stream so you can do extra elegant things with it, and lets you control how much concurrency you impose on the system.
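
Roughly like this, where fetch and urls are just stand-ins for whatever slow per-item work you actually have:

    # A stand-in for some slow per-item work, an HTTP call or similar.
    fetch = fn url ->
      Process.sleep(100)
      {url, :fetched}
    end

    urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

    # The hand-rolled version: one task per element, await them all.
    urls
    |> Enum.map(fn url -> Task.async(fn -> fetch.(url) end) end)
    |> Task.await_many()

    # Task.async_stream: bounded concurrency, a stream of results and
    # out-of-order delivery if order doesn't matter.
    urls
    |> Task.async_stream(fetch, max_concurrency: 8, ordered: false, timeout: 5_000)
    |> Enum.map(fn {:ok, result} -> result end)

Each successful result comes back wrapped as {:ok, value}, hence the unwrapping at the end.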

It feels like a function added just to make a brutal flex of a demo for the language. "So we have this list we are mapping over. Let's do so concurrently."

     

    To Code BEAM & Back Again

    I've gone to every Code BEAM Stockholm I have been able to since I started Elixir. I like conferences and the reworked Erlang-focused conference has widened beautifully into a more complex mixture of Erlang, Elixir and occasionally other things.

    Help me share it? Mastodon, Twitter, LinkedIn

    Thumbnail of me with caption "Is one day of conference worth travelling for?"

    Watch it on YouTube

     

    My own personal fly deploy

    This is an idea that might be a bit bad but I need to try to get it out of my head. To some extent it is certainly a stunt. But stunts are worthwhile occasionally. Both in showing capabilities and expanding our horizons.

    Elixir and Erlang are incredibly capable at absorbing tasks that would be external infrastructure elsewhere. Erlang was built to run the entire system and takes on a lot of higher-level duties that there are OS-level analogs for. Elixir has the same capabilities and we see some of that repeated in what has been built in the ecosystem.

    Oban, for example, implements a very nuanced and particular persistent queueing system that can live inside your application instead of as a separate app with separate infrastructure. Another library I really like is SiteEncrypt, which gives your Phoenix app the Let's Encrypt song and dance as part of your app. SiteEncrypt doesn't matter much if you are deploying on a Platform-as-a-Service that will definitely offer TLS termination and automatic certificate generation. But for custom deployments on VPSes or similar it is really quite delightful.

    Now what I want are two mix tasks.

    mix blargh.deploy - Using our non-existent deployment library blargh, build and deploy the current app. What would a deploy be? (Rough sketches of the sending and receiving sides follow below the list.)

    1. Build a release.
    2. Archive it, hash it and sign it with a shared key.
    3. As an HTTP file upload, push the release to the default (or provided arg) target server where your app lives.
    4. App receives the new version of itself.
    5. Verifies signature and hash.
    6. Unpacks in a new directory, probably blue/green style.
    7. Runs startup for new application, migrations, etc.
    8. Starts the new application, configured to share any bound ports so it can start taking on web traffic.
    9. Checks the health of the new install with some convention.
    10. Shuts itself down.
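
    To make steps 1 through 3 concrete, here is a rough sketch of what the sending side could look like. Everything here is hypothetical: the blargh names, the /blargh/releases endpoint and the BLARGH_KEY shared secret don't exist, and it assumes the release config includes the :tar step so mix release produces a tarball.

        defmodule Mix.Tasks.Blargh.Deploy do
          @moduledoc "Hypothetical: build, sign and push a release to the target server."
          use Mix.Task

          @impl Mix.Task
          def run(args) do
            # Run as: MIX_ENV=prod mix blargh.deploy [target-url]
            target = List.first(args) || System.get_env("BLARGH_TARGET", "https://example.com")
            key = System.fetch_env!("BLARGH_KEY")

            # 1. Build a release. Assumes `steps: [:assemble, :tar]` in the release
            #    config so we get a tarball out of it.
            Mix.Task.run("release", ["--overwrite"])

            app = Mix.Project.config()[:app]
            vsn = Mix.Project.config()[:version]
            tarball = Path.join(Mix.Project.build_path(), "#{app}-#{vsn}.tar.gz")
            archive = File.read!(tarball)

            # 2. Hash the archive and sign the hash with the shared key.
            hash = Base.encode16(:crypto.hash(:sha256, archive), case: :lower)
            signature = Base.encode16(:crypto.mac(:hmac, :sha256, key, hash), case: :lower)

            # 3. Push the release to the target server as an HTTP upload.
            {:ok, _} = Application.ensure_all_started(:inets)
            {:ok, _} = Application.ensure_all_started(:ssl)

            headers = [
              {~c"x-blargh-hash", String.to_charlist(hash)},
              {~c"x-blargh-signature", String.to_charlist(signature)}
            ]

            url = String.to_charlist(target <> "/blargh/releases")
            request = {url, headers, ~c"application/octet-stream", archive}
            {:ok, {{_, 200, _}, _, _}} = :httpc.request(:put, request, [], [])

            Mix.shell().info("Pushed #{tarball} to #{target}")
          end
        end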

    This should raise a bunch of questions and I promise I have not ironed them all out. I have implemented pretty much none of this but it is largely not controversial. There are devils in the details. Anyway. Theory-crafting here.
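
    The receiving side, steps 4 through 6, could boil down to a plug along these lines. Again, every name is hypothetical, and it skips the blue/green bookkeeping as well as steps 7 through 10:

        defmodule Blargh.ReleasePlug do
          @moduledoc "Hypothetical: receive, verify and unpack a pushed release."
          import Plug.Conn

          def init(opts), do: opts

          # 4. The app receives the new version of itself as a raw upload.
          def call(%Plug.Conn{method: "PUT", request_path: "/blargh/releases"} = conn, _opts) do
            key = System.fetch_env!("BLARGH_KEY")
            {:ok, archive, conn} = read_whole_body(conn)

            # 5. Verify hash and signature before trusting the payload.
            hash = Base.encode16(:crypto.hash(:sha256, archive), case: :lower)
            expected = Base.encode16(:crypto.mac(:hmac, :sha256, key, hash), case: :lower)
            [claimed_hash] = get_req_header(conn, "x-blargh-hash")
            [signature] = get_req_header(conn, "x-blargh-signature")

            if Plug.Crypto.secure_compare(hash, claimed_hash) and
                 Plug.Crypto.secure_compare(expected, signature) do
              # 6. Unpack into a fresh directory, blue/green style.
              dir = Path.join("releases", to_string(System.os_time(:second)))
              File.mkdir_p!(dir)
              :ok = :erl_tar.extract({:binary, archive}, [:compressed, cwd: String.to_charlist(dir)])

              # Steps 7-10 (start, migrate, health check, hand over) would go here.
              conn |> send_resp(201, "unpacked into #{dir}") |> halt()
            else
              conn |> send_resp(403, "bad signature") |> halt()
            end
          end

          def call(conn, _opts), do: conn

          # Read the full request body, looping if it arrives in chunks.
          defp read_whole_body(conn, acc \\ <<>>) do
            case read_body(conn, length: 500_000_000) do
              {:ok, chunk, conn} -> {:ok, acc <> chunk, conn}
              {:more, chunk, conn} -> read_whole_body(conn, acc <> chunk)
            end
          end
        end

    Stick something like that early in the endpoint and the app can accept its own next version.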

    What do we get? Zero-downtime deployments on a single host, even a physical machine if you like (I like). We also get the opportunity for the app to act on information about the deployment. We could build deployment dashboards and version information right into our app. It is weird. It is not uniquely possible on the BEAM. But it makes more sense on the BEAM as it can be your entire application.

    In other ecosystems you invariably run a bunch of extra generic infra. Which isn’t bad. I like nginx, haproxy, redis and such. Great software. Doesn’t mean I always want them around.

    mix blargh.seed - Given some SSH credentials, deploy a minimal version of the app to a bare server: just the part that receives deployments plus enough config to start it up. The goal being that you can then deploy your app properly.
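
    A seeding sketch, entirely hypothetical, that just shells out to ssh and scp rather than speaking SSH from the BEAM. It assumes a Linux target with systemd plus a prebuilt "receiver only" release tarball and unit file:

        defmodule Mix.Tasks.Blargh.Seed do
          @moduledoc "Hypothetical: prepare a bare server to receive blargh deployments."
          use Mix.Task

          @impl Mix.Task
          def run([ssh_target]) do
            # Copy a minimal "receiver only" release and a systemd unit over.
            # Both files are placeholders that blargh would need to produce.
            {_, 0} = System.cmd("scp", ["blargh_seed.tar.gz", "#{ssh_target}:/tmp/"])
            {_, 0} = System.cmd("scp", ["blargh.service", "#{ssh_target}:/tmp/"])

            # Unpack, install the unit and start it. This is where OS detection
            # would have to happen; here we just assume Linux with systemd.
            remote = """
            sudo mkdir -p /opt/blargh &&
            sudo tar -xzf /tmp/blargh_seed.tar.gz -C /opt/blargh &&
            sudo mv /tmp/blargh.service /etc/systemd/system/blargh.service &&
            sudo systemctl daemon-reload &&
            sudo systemctl enable --now blargh
            """

            {output, 0} = System.cmd("ssh", [ssh_target, remote])
            Mix.shell().info(output)
          end
        end

    Something like mix blargh.seed deploy@my-server would then leave a listener waiting for the first mix blargh.deploy.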

    You'll notice I haven't mentioned containers and I'm probably stupid to want to do it without them. I'm pretty keen to try not to introduce a separate chunk of software outside the Elixir and Erlang stuff you already use for development. I don't mind using whatever is already there in your environment, and that is one tricky thing. As we try to seed the server, are we under Debian, Ubuntu, Alpine, RHEL, FreeBSD, macOS? I could constrain it to what I typically use but I'd ideally want it to at least be able to identify other targets and be extendable to cover them.

    This also raises a question about how the app is run and kept running. Shutdowns can come suddenly; how do we ensure it starts again after a reboot, or if it gets killed by the OOM killer? Systemd? Systemd is only on some systems. Again, a case of figuring out what is there and doing the right thing, which gets really quite tricky. To keep it running I think the Erlang :heart module could help. Nerves does something like that. But ensuring it gets started at all is a bit of a question mark. Targeting Linux and systemd is probably the pragmatic choice, along with a way to identify unsupported configs and ask for contributions to cover more. Make a simple API over it and have people contribute their way of the ninja.
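
    For the Linux-and-systemd path the unit file itself would be unremarkable. Something like this, where all the paths and names are placeholders; Restart=always covers crashes and the OOM killer, while :heart can guard the runtime from the inside:

        [Unit]
        Description=my_app (Elixir release)
        After=network.target

        [Service]
        Type=simple
        User=deploy
        WorkingDirectory=/opt/my_app/current
        ExecStart=/opt/my_app/current/bin/my_app start
        ExecStop=/opt/my_app/current/bin/my_app stop
        Restart=always
        RestartSec=5
        EnvironmentFile=/etc/my_app/env

        [Install]
        WantedBy=multi-user.target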

    Generally a deploy would run from CI in my mind, but during the early stages of new projects I love being able to fire off a deploy from my terminal if I don't have CI ready to go. This is where `fly deploy` on Fly.io is really nice.

    Known problem-areas:

    • Environment variables and secret management need to be updated as the new release is put into place and such.
    • Releases needing to be built on the same arch and ~OS they are to be run on.
    • Containers would solve some problems but introduce others.
    • Health checks are harder than I hand-waved above.
    • No goddarn server OSes work the same when it comes to bringing in deps.

    As someone who is not particularly scared of servers, and who is constantly a bit miffed when dealing with big clouds and platforms, the idea of having an app figure out how to deploy itself onto a basic server, bringing in no extra tools beyond a library, really appeals to me.

    Are you tempted or disgusted? Intrigued or concerned? I'd love more input on this, so email me or poke me on the fedi at @lawik@fosstodon.org.

    Thank you for reading. I appreciate you spending some of your moments on this.

     
     

    This is an email from Underjord, a Swedish consultancy run by Lars Wikman.

    Everything else is found at underjord.io

    You signed up for this newsletter and confirmed the subscription. If you want to stop receiving it, just use the link below.

    Preferences  |  Unsubscribe