I’m talking not only about trusting the distribution chain, but about the situation where some services don’t rebuild their images on updated bases unless they have a new release.
So, for example, if a particular service’s latest tag is a year old, they keep distributing it with a year-old Alpine base…
Rebuilding containers is trivial if they supply the Dockerfile. Then the base image is up to date, and you can add any updates/patches for things like the recent React vuln.
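For example (repo URL and image name here are made up), a rebuild from a supplied Dockerfile is usually just:

```bash
# clone the project that ships the Dockerfile (hypothetical URL)
git clone https://example.com/some-service.git
cd some-service

# --pull fetches the newest tag of the FROM base image,
# --no-cache avoids reusing stale layers from previous builds
docker build --pull --no-cache -t some-service:rebuilt .
```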
Not currently, but I’m planning on getting to it in 2026. I want to pull things into my Forgejo and use some workflows there to scan for vulnerabilities and rebuild/tweak the images I deem necessary. It will be a fun project.
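A rough sketch of what such a workflow could look like (assuming Forgejo Actions is enabled, a Docker-capable runner, and a version that supports scheduled workflows; the image name is just a placeholder):

```yaml
# .forgejo/workflows/scan.yml -- rough sketch only
name: weekly-image-scan
on:
  schedule:
    - cron: "0 3 * * 1"   # every Monday at 03:00
jobs:
  scan:
    # the label depends on how your Forgejo runner is configured
    runs-on: docker
    steps:
      - name: Scan a deployed image for HIGH/CRITICAL CVEs
        run: |
          # Trivy pulls the image straight from the registry and prints findings
          docker run --rm aquasec/trivy:latest image \
            --severity HIGH,CRITICAL \
            ghcr.io/example/some-app:latest
```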
Almost never. I don’t see any benefit.
No. I only have a limited amount of time for maintaining my home infrastructure, so I choose my battles.
I do look out for new images that could be a drop-in replacement.
The new distroless container builds are very interesting.
Right now I’m only using Docker for the services I have behind a VPN, so I don’t put that much thought into securing them. If I had any publicly accessible ones, I would set up automatic patching or even build my own custom images.
And as always I’m trying to up my security game, but not at any cost.
I didn’t realise this was a problem.
I’m not too worried about it though.
Each container has such a small attack surface. As in, my reverse proxy Traefik exposes ports 80 and 443, and all the other containers only expose their APIs or web servers to Traefik.
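In compose terms the layout is roughly this (service names and tags are just placeholders, and a real Traefik setup needs provider config/labels on top of it):

```yaml
services:
  traefik:
    image: traefik:v3
    ports:
      - "80:80"     # only the reverse proxy publishes host ports
      - "443:443"
    networks: [proxy]

  some-app:
    image: ghcr.io/example/some-app:latest
    # no "ports:" here -- the app's web server is reachable only by Traefik
    # over the internal "proxy" network
    networks: [proxy]

networks:
  proxy: {}
```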
Rebuild: no. If the software itself is unmaintained, it gets replaced.
Patch: yes. If the base image contains vulnerabilities that can be fixed with a package update, then that gets applied. The patch size and side effects can be minimized by using Copacetic, which can ingest Trivy scan results to identify the vulnerabilities (a rough sketch of that flow is below).
There are also registries like Chainguard and Docker Hardened Images, which are handy for getting up-to-date images of commonly used tools.
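The Trivy + Copacetic flow mentioned above looks roughly like this (flags can vary a bit between versions, copa needs BuildKit available, and the image name is made up):

```bash
# scan the image and keep only findings that actually have a fix
trivy image --ignore-unfixed --format json --output report.json \
  ghcr.io/example/some-app:1.2.3

# apply just those package updates on top of the existing image
copa patch -i ghcr.io/example/some-app:1.2.3 -r report.json -t 1.2.3-patched
```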
I don’t think a year-old base is bad. Unless there’s an absolutely devastating CVE in something like the network stack or a particular shared library, any vulnerabilities in it will probably just be privilege escalations that wouldn’t have any effect unless you were allowing people shell access to the container. Obviously, the application itself can have a vulnerability, but that would be the case regardless of the base image.
No
I’ve only done it once (so far), because I needed a specific add-on for the software.
In my case, I wanted to use the Caddy web server with a specific plugin. It was quite easy to create a new image exactly the way I wanted it.
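For reference, the pattern is basically the two-stage build from the official Caddy image docs; the plugin here is just an example, not necessarily the one I used:

```dockerfile
# build a caddy binary with the extra module compiled in
FROM caddy:2-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare

# drop the custom binary into the normal runtime image
FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```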
All the time. There are a lot of CVEs in old pre-built Docker images.
I don’t know enough about code to verify things myself, and I assume that applies to a lot of us here. So I just pray that nothing’s fucked in the distribution chain.
I’m also in this category, but OP is talking about something else.
Like if you use container-x, which has an Alpine base: if it hasn’t released a new version in several years, then you’re using a several-year-old Alpine distro.
I didn’t really realise this was a thing.
Ah, I have no idea what that is. I thought OP meant building stuff directly from GitHub (e.g. Ungoogled Chromium). Thanks for the clarification! :)
Containers have layers. So if you create an instance of a Syncthing container, whoever built that container would have started from some other container; Alpine Linux is a very popular base layer, just used as an example in this discussion.
When you download an image, all the layers underlying the application you actually wanted will only be as fresh as the last time the maintainer built that image. So if there were a bug in the Alpine base, it might have been fixed in Alpine, but the fix wouldn’t be pushed through to whatever you downloaded.
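You can see this for yourself by checking the layer dates of an image you’ve pulled (the image name here is just an example):

```bash
# per-layer creation dates -- old dates on the bottom layers mean a stale base
docker history --no-trunc ghcr.io/example/some-app:latest

# or just the date the whole image was built
docker inspect --format '{{.Created}}' ghcr.io/example/some-app:latest
```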
If you care about security, you build it on your own. No need to trust some random dude on the internet. After all, it’s just fire and forget: copy whatever “code” is used to build the container you’re after, verify it once, and then just rebuild it periodically to pull patches from more reliable sources. Docker security is a joke, no need to make it worse.
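For the “rebuild it periodically” part, a cron entry along these lines is enough (paths and names are made up, and it assumes the compose file sits next to the Dockerfile):

```bash
# rebuild every Monday at 04:00 so base-image patches get pulled in,
# then restart the stack with the freshly built image
0 4 * * 1  cd /opt/builds/some-service && docker build --pull --no-cache -t some-service:local . && docker compose up -d
```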
Yes, because I mostly like to have my services built in a Debian container inside my Proxmox environment. If I’m running it in Docker, there’s a good chance it’s temporary/PoC, and in that case I don’t rebuild or anything; I run it for whatever purpose it serves and then it either goes away or gets migrated to a handcrafted Debian container.
I’ve never rebuilt a container, but I also don’t have any containers in deprecated status. I swap to alternatives when a project hits deprecated or abandonware status.
The only deprecated container I currently have is filebrowser. I’m still seeking alternatives and have been for a while now, but strangely enough there don’t seem to be many web-UI file management containers.
As such, ever since I learned that the project was on life support (the maintainer has said they are doing security patches only, and that while they are doing more on the project currently, that could change), the container remains off; I only activate it when I need to use it.
File Browser Quantum is quite the popular replacement, if you don’t need any of the things it hasn’t implemented yet, and especially if you enjoy all the new things it can do!
Quantum! 🏁










