

Same. The only issue I’ve had is it not finding my TV shows, but once I figured out how it wants them stored, no issues whatsoever.
Nearly every streaming service you use is transcoding on the fly instead of storing 20 versions of each video.
If you’re talking about commercial streaming services like Netflix, I highly doubt that. If you’re talking about self-hosted services like Plex, then you’re absolutely right.
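For a concrete sense of what “transcoding on the fly” means in practice, here’s roughly the kind of ffmpeg invocation a media server runs under the hood; the filenames and bitrates are made up for illustration:

```sh
# Hypothetical on-the-fly transcode: turn a source file into an HLS stream
# the client can start playing immediately. Names/bitrates are placeholders.
ffmpeg -i source.mkv \
  -c:v libx264 -b:v 4M \
  -c:a aac -b:a 128k \
  -f hls -hls_time 6 \
  stream.m3u8
```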
The app on my LG TV is acceptable, but does have random problems, like it can’t connect over TLS, and it’s kinda slow to navigate. But it works, and my kids know how to work it.
Yeah, it’s old, but it took effect this month for existing customers, hence why I noticed.
Thanks for the suggestion. Any issues with the transfer speeds?
Especially for CPU temps. A much more interesting range is 50–100°C.
That’s fair.
That said, I can’t think of anything I’d want to run that doesn’t work in docker, except maybe pf? But I’d probably put that on a dedicated machine anyway. Pretty much everything else runs on Linux or has a completely viable Linux alternative, so I could easily build a docker image for it.
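As a sketch of how little that usually takes; “some-service” and its paths are placeholders, not a real package:

```dockerfile
# Minimal pattern for containerizing an arbitrary Linux service.
# "some-service" is a placeholder package name.
FROM debian:stable-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends some-service \
    && rm -rf /var/lib/apt/lists/*
# Keep config and data on volumes so the image itself stays disposable.
VOLUME ["/etc/some-service", "/var/lib/some-service"]
CMD ["some-service", "--config", "/etc/some-service/config.conf"]
```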
I don’t use proxmox, so I guess I don’t understand the appeal. I don’t see any reason to back up a container or a VM; I just back up configs and data. Backing up a VM makes sense if you have a bunch of customizations, but avoiding that is pretty much the entire point of docker: you quarantine your customizations to your configs, so everything is completely reproducible as long as you have the configs and data.
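In practice that backup can be a one-liner; the paths here are examples, not a prescription:

```sh
#!/bin/sh
# Back up only what can't be re-pulled: compose files, configs, and data.
# /opt/stacks is an example layout; point this at wherever yours live.
STAMP=$(date +%F)
tar czf "backup-$STAMP.tar.gz" \
  /opt/stacks/*/docker-compose.yml \
  /opt/stacks/*/config \
  /opt/stacks/*/data
# Restore = untar, then `docker compose up -d`.
```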
I don’t use proxmox, but docker works absolutely fine for me on my regular Linux system, which has a firewall, some background services, etc. Could you be more specific about the issues you’re running into?
Also, I only really expose two services on my host; everything else just connects through an internal-only docker network.
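A minimal compose sketch of that layout, assuming a reverse proxy out front; the service names are hypothetical:

```yaml
# Only the proxy publishes a port; the app and db sit on an internal-only
# network with no route off the docker host. Service names are placeholders.
services:
  proxy:
    image: caddy:2
    ports:
      - "443:443"
    networks: [frontend, backend]
  app:
    image: example/app:1
    networks: [backend]
  db:
    image: postgres:16
    networks: [backend]

networks:
  frontend:
  backend:
    internal: true   # no external connectivity
```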
If you’re getting conflicts, I’m guessing you’ve configured things oddly, because by default, docker creates its own virtual interface to explicitly not interfere with anything else on the host.
You don’t have to revert 8 services, you can stop/start them independently:
docker compose stop <service name>
This is actually how I update my services: I just stop the ones I want to update, pull, and restart them, one or two at a time, mostly to mitigate issues. The same is true for pulling down new versions; the last step is always:
docker compose up -d
which brings up any stopped services using the new image(s). I do this whenever I remember, and it works pretty well.
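Scripted, the whole loop looks roughly like this; “jellyfin” is just an example service name from a compose file:

```sh
#!/bin/sh
# Update services one at a time: stop, pull the new image, bring it back up.
for svc in jellyfin; do
  docker compose stop "$svc"
  docker compose pull "$svc"
  docker compose up -d "$svc"
done
```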
I’m guessing people are largely using the wrong terminology for things that make more sense, like backing up/snapshotting the config and data that containers use. Maybe they’re also backing up images (which a lot of people call “containers”), just in case they get yanked from wherever they came from.
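For the image-hoarding case, docker can serialize an image to a tarball; alpine:3 here is just a stand-in for whatever image you care about:

```sh
# Keep a local copy of an image in case it disappears from its registry.
docker save -o alpine.tar alpine:3
# Later, restore it without touching the registry at all:
docker load -i alpine.tar
```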
That said, yeah, someone should write a primer on how to use Docker properly and link it in the sidebar. Something like:
- the basics of docker run
- tagging conventions (e.g. <image>:<major>.<minor>.<patch> and <image>:<major>, etc)
- updating images (i.e. what happens when you “pull”)
I’ve been using docker for years, but I’m sure there are some best practices I’m missing, since I’m more of a developer than a sysadmin.
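On the tagging point, a quick illustration of how moving tags behave; postgres is just a familiar example:

```sh
# A tag like "postgres:16" is a moving pointer: pulling it again later can
# fetch a newer 16.x build, while "postgres:16.4" stays on that exact release.
docker pull postgres:16
docker pull postgres:16.4
# "docker compose pull" does this for every image in the compose file; running
# containers only switch to the new image on the next "up -d".
```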
Has anyone tried https://github.com/hickory-dns/hickory-dns? It seems to be a complete DNS server instead of what looks like a bunch of bash config for a standard Linux tool. There are block lists you can configure as well, and it supports pretty much everything.
It’s way overkill, but hey, why not?
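If anyone wants to kick the tires on any local resolver, here’s a generic smoke test, assuming it’s listening on 127.0.0.1:53:

```sh
# Send a one-off query to the local server instead of the system resolver.
dig @127.0.0.1 -p 53 example.com A +short
# A domain on a blocklist would typically come back NXDOMAIN or 0.0.0.0.
dig @127.0.0.1 -p 53 ads.example.com A
```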
I don’t use pihole, but everything I use is pinned by major release version. No surprise breakage yet.
Yeah, I think so. I’m not interested in addons anyway.
Looks like 9? Here’s what I’m currently running:
The rest are databases and other auxiliary stuff. I’m probably going to work on it some this holiday break, because I’d like to eventually move to microOS, and I still have a few things running outside of containers that I need to clean up (e.g. Samba).
But yeah, like others said, it really doesn’t matter. On Linux (assuming you’re not using Docker Desktop), a container is just a process. On other systems (e.g. Windows, macOS, or Linux w/ Docker Desktop), they run in a VM, which is a bit heavier and reserves more resources for itself. I could run 1000 containers and it really wouldn’t matter, as long as they’re pretty light.
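You can see the “just a process” bit directly on a Linux host; alpine is only used as a convenient throwaway image:

```sh
# Start a throwaway container, then find it in the host's process table.
docker run -d --name demo alpine:3 sleep 300
ps -ef | grep '[s]leep 300'   # shows up as an ordinary host process
docker rm -f demo
```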
Yeah, I did have an issue transcoding a bluray rip, but I think that might be a network limitation rather than a processing one. A 1080p transcode worked fine, so it’s not resolution.
One of these days I’ll DIY an HTPC, but for now, the Jellyfin app works acceptably well.
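For anyone hitting the same thing, here’s one way to separate network limits from processing limits; the hostname is a placeholder:

```sh
# On the server, watch CPU while the stream plays; a transcoder pegging
# all cores points at a processing limit.
top

# Then measure raw client<->server throughput with iperf3.
# Run "iperf3 -s" on the server first, then from the client:
iperf3 -c media-server.lan
# If throughput is below the rip's bitrate (bluray remuxes can run 30-40+
# Mbps), the network is the bottleneck, not the transcode.
```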