  • I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the BIOS for boot order, virtualization, and a few other settings.

    I’ve since added a UPS to the mix, but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.

    It’s a neat concept, and if you need remote BIOS access it’s great, but people usually overestimate how useful that really is.





  • I can’t speak for everyone else, but I run about 6 different VMs solely to run different Docker containers. They’re split out by use case: super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering wouldn’t take down Jellyfin and other services for my wife and kids.

    Beyond that, I also have two VMs on different hosts running virtualized Pi-hole kept in sync with Gravity Sync, and another I intend to use for virtualized OPNsense.

    Everything is managed via Ansible, with each Docker project in its own Forgejo repo.
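
    For flavor, here’s roughly what each per-project deploy reduces to. The real thing is an Ansible play (git plus the community.docker modules); the Forgejo URL, project name, and /opt/docker layout below are invented for illustration:

    ```bash
    #!/usr/bin/env bash
    # Hypothetical sketch of one per-project deploy; in practice this is
    # an Ansible play, and every path/URL here is a made-up example.
    set -euo pipefail

    project="jellyfin"                                   # one repo per project
    repo="ssh://git@forgejo.lan/homelab/${project}.git"  # placeholder remote
    dest="/opt/docker/${project}"

    # Clone on the first run, fast-forward pull on later runs
    if [ -d "$dest/.git" ]; then
      git -C "$dest" pull --ff-only
    else
      git clone "$repo" "$dest"
    fi

    # Apply whatever changed in the repo
    docker compose --project-directory "$dest" up -d
    ```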



  • I’m assuming you installed it directly in the container vs. running Docker in there?

    I have been debating making the jump from Docker in a VM to a container, but I’ve been maintaining Nextcloud in Docker the entire time I’ve been using it and haven’t had any issues. The interface can be a little slow at times, but I’m usually not in there for long. I’m not sure it’s worth essentially rearchitecting my setup for that.

    All that aside, I also map an NFS share to my Docker container that stores all my files on my NAS. This could be what causes the interface slowness I sometimes see. Last time I looked into it there wasn’t a non-hacky way to mount a share to an LXC container; has that changed?
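
    For context, the Docker side of that mapping is just an NFS-backed named volume, something like this sketch (the NAS address and export path are placeholders, not my real setup):

    ```bash
    # Create a named volume backed by an NFS export; 192.168.1.10 and
    # /volume1/nextcloud are placeholder values.
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.10,rw,nfsvers=4 \
      --opt device=:/volume1/nextcloud \
      nextcloud_data

    # It then mounts like any other named volume
    docker run -d --name nextcloud \
      -v nextcloud_data:/var/www/html/data \
      nextcloud
    ```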


  • Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there are two competing providers in my area that aren’t in cahoots (that I can tell). I much prefer the more expensive one and was able to get them to match the other’s price.

    My wife has been dropping hints that she wants to move to another state though, and I’m low-key dreading dealing with a new ISP/losing my current plan.


  • I do a separate container for each service that requires a db. It’s pretty baked into my backup strategy at this point: the script I wrote pulls dump credentials from environment variables, so I don’t have to update it for every new service I deploy.

    If the container name ends in -dbm it’s MySQL, -dbp is Postgres, and -dbs would be SQLite if it needed its own container. The suffix triggers the appropriate backup command, which pulls the user, password, and db name from environment variables in the container.
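
    In practice the suffix handling looks something like this sketch; the env var names follow the stock MySQL/Postgres image conventions rather than my exact script:

    ```bash
    #!/usr/bin/env bash
    # Sketch of the suffix-driven dump loop. MYSQL_* and POSTGRES_* are
    # the common image conventions; treat all names and paths as examples.
    backup_dir="/backups/$(date +%F)"
    mkdir -p "$backup_dir"

    for name in $(docker ps --format '{{.Names}}'); do
      case "$name" in
        *-dbm)  # MySQL/MariaDB
          docker exec "$name" sh -c \
            'exec mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
            > "$backup_dir/$name.sql"
          ;;
        *-dbp)  # Postgres
          docker exec "$name" sh -c \
            'exec pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' \
            > "$backup_dir/$name.sql"
          ;;
      esac
    done
    ```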

    I’m not too concerned about system overhead, and I’ve debated running a single container for each db type just to do it, but I like not having a single point of failure for all my services. (I even run different VMs to keep stable services from being impacted by me testing random stuff out.)



  • I host Forgejo internally and use that to sync changes. The .env and data directories are in .gitignore (they get backed up via a separate process).

    All the files belong to my docker group, so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart), and any user that needs to manipulate containers would have the appropriate sudo access.
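
    As an illustration, one of those unit files might look like the sketch below; the service name and WorkingDirectory are hypothetical:

    ```bash
    # A oneshot unit wrapping a compose project (paths are examples)
    sudo tee /etc/systemd/system/jellyfin.service >/dev/null <<'EOF'
    [Unit]
    Description=Jellyfin compose stack
    Requires=docker.service
    After=docker.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/opt/docker/jellyfin
    ExecStart=/usr/bin/docker compose up -d
    ExecStop=/usr/bin/docker compose down

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable --now jellyfin.service
    ```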

    It’s only me that does all this though; I set it up this way for funsies.



  • I’ve been running it behind Cloudflare with no issues. I’m also doing it a completely different way than the official docs and the ubergeek method, mostly because I have a particular way I do my Docker stuff.

    Every time something has broken, it’s been 100% on me. My favorite way to learn is by breaking things though, so I also have an account on a different instance in case I break mine and have to wait a bit to fix it 😅