Hacker News

Real-world usage: I converted a bunch of containers from debian-slim to it in our fairly large CI/CD setup, and it processed workloads noticeably slower, despite starting slightly quicker since it's a smaller pull. With network speeds nowadays, though, the pull-time saving was negligible and not worth it. Rolled it all back to debian-slim.



Would these performance concerns be an issue if you were using alpine "on the metal" and debian containers?

When running complex applications I find it's simplest to "compile" the application into a container, effectively reducing a complex runtime to something as trivial to run as a static binary, with no tedious dependency management to worry about. It burns a bit of storage, but that's not a big deal these days.
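As a sketch of that approach, a multi-stage build can bake everything into a statically linked binary and ship it on an empty base image. This assumes a hypothetical Go service (the paths and names are illustrative, not from the thread):

```dockerfile
# Build stage: compile a hypothetical Go service into a fully static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 avoids any libc dependency, so the glibc-vs-musl
# question disappears entirely at runtime.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: nothing but the binary; no package manager, no libc.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the binary, so the choice of base distro stops mattering for the runtime layer.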

If someone suggested to me "hey I want to run a big PHP / nginx / mysql workload for my startup; should I use alpine?" I'd suggest they find a doctor.


We provide a CI/CD system that supports several departments and teams with varying technology stacks and a plethora of different pipelines: some big Java projects, Python, and loads of other batch jobs like Spark. If only it were as simple as just running it on bare metal. The issue is with musl libc, not the hardware.


I am pretty comfortable with suggesting one OS as the "hardware" OS and another OS for the userland...

Alpine's design makes it really well suited to "hardware"; I'd even suggest it's probably a good way to run kubernetes or lxd because it's simple and trivial to provision/customize and not full of vendor nonsense.

You can use alpine as a "base container" layer, but you'll quickly end up in a world where glibc-vs-musl issues or "I need a weird package" makes a tiny centos/debian container more appealing. If you've got Java or Python or Ruby or some other complex runtime, just run it in the most commonly used base container and don't go looking for trouble...



