Presented by

  • Tim Serong

    @tserong
    https://ourobengr.com/

    Tim spends most of his time working for SUSE, hacking on Ceph and related technologies. He has spoken about high availability and distributed storage at several previous LCAs. In his spare time he wrangles pigs, chickens, sheep and ducks, and was declared by one colleague "teammate most likely to survive the zombie apocalypse".

Abstract

The Ceph project switched from installing regular software packages to deploying application containers (Podman/Docker/Kubernetes) in its most recent release. Suddenly, we have storage clusters where the admin isn't dealing with a bunch of normal daemons anymore; instead there's a whole lot of containers running, and if you're used to the old way of doing things, the container mode can be inscrutable and difficult to debug when things go wrong.

This isn't just a story for Ceph people - it's a story of unexpected failures, of learning where to look when things break, of trying to fix those broken things, and then discovering someone's hidden half your trusty old tools and the other half don't work properly anymore.

Ceph users will come away from this talk with a good understanding of how everything is deployed now, and why. Beyond that, though, I hope to ensure that anyone who's gone from dealing with packaged software to the container world never finds themselves like I did, late one night, staring at a terminal into which they've just typed the words "containers are hideously undebuggable black boxes and we never should have invented them".