


  • Update went fine on a bare metal install. Customising the webUI port is a little easier now: instead of editing lighttpd.conf, I think you can do it in the UI.

    I struggled to find some settings; I looked for ages for the API token. Found it under all settings: expert, then scroll half a mile down to the webUI/API section.

    Also, I struggled with adding CNAMEs in bulk. I thought you could do that in the old UI, and you might be able to in the new one, but I just one-by-one'd them. (A shell-based bulk idea is sketched at the end of this comment.)

    Docker update went flawlessly.

    I have an LXC install still to go, which is a task for another day, unless TTeck's updater beats me to it.
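
    For the bulk CNAMEs, this is roughly what I'd try from the shell. It assumes the old dnsmasq-style layout (Pi-hole v5 wrote web-UI CNAMEs to this file); the updated Pi-hole may keep these in its own config instead, so treat the path and the restart command as assumptions and check first:

```bash
# Rough sketch: bulk-add CNAMEs from a list, assuming the v5 dnsmasq-style
# config file still exists. Path and command are assumptions - verify first.
CNAME_FILE=/etc/dnsmasq.d/05-pihole-custom-cname.conf

# records.txt holds one "alias,target" pair per line,
# e.g. "jellyfin.lan,server.lan"
while IFS=, read -r alias target; do
  echo "cname=${alias},${target}" | sudo tee -a "$CNAME_FILE" > /dev/null
done < records.txt

# Reload DNS so the new records take effect (Pi-hole v5 command)
sudo pihole restartdns
```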



  • My main storage is a mirrored pair of HDDs; versioning is handled there.

    It Syncthings an “important” folder to a local backup with only one HDD.

    The local backup Syncthings to a machine at my parents' house with one SSD.

    My setup can be better: if I put the versioning on my local backup it'd free space on my main storage, and I could migrate to dedicated backup software, Borg maybe, instead of Syncthing (a rough sketch of that is at the end of this comment). But Syncthing is what I knew and understood when I was slapdashing this together. It's a problem for future me.

    I've been seriously considering an EliteDesk G4 or Dell/Lenovo equivalent as backup machines. Mirrored drives. Enough oomph to HA the things using the “important” files: Immich, Paperless, etc.
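
    If I do swap Syncthing for Borg on the local backup, the shape of it would be something like this (the repo and folder paths are invented for the example):

```bash
# One-time setup: create an encrypted repo on the local backup machine
# (path is a made-up example).
borg init --encryption=repokey /mnt/backup/borg-repo

# Take a dated snapshot of the "important" folder (path also made up).
borg create --stats --compression zstd \
  /mnt/backup/borg-repo::important-{now:%Y-%m-%d} /srv/important

# Keep a sensible amount of history, so versioning lives on the backup
# machine instead of eating space on the main pool.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```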


  • My big problem is remote stuff. None of my users have aftermarket routers, so they can't easily manipulate their DNS. One has an Android modem thing which is hot garbage. I'm using a combination of approaches: making their Pi their DHCP server, and one user is running on Avahi.

    Chrome, the people's browser of choice, really, really hates HTTP, so I'm putting them on my garbage ######.xyz domain. I had plans to one day deal with HTTPS, just not this day. Locally I only use the domain for Vaultwarden, so it didn't matter. But if people are going to be using it then I'll have to get a more memorable one.

    System updates have been a faff. I'm SSHing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. It also fucks up dpkg beyond what --configure -a can repair. I'll learn to update in the background one day (a sketch is at the end of this comment), or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.

    Locally it works as intended though, so that's nice. Everything also works remotely for my fiancée and me, all as intended, which is also nice. My big project is coalescing what I've got into something rational. I'm on the make it good part of the “make it work > make it good” cycle.
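
    The “update in the background” bit could be as simple as running the upgrade detached from the SSH session, so a Tailscale restart dropping the connection doesn't kill dpkg mid-run. A sketch assuming systemd (the unit name is made up):

```bash
# Run the upgrade as a transient systemd unit, detached from the SSH session.
sudo systemd-run --unit=remote-upgrade \
  bash -c 'apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y full-upgrade'

# Reconnect after Tailscale restarts and watch progress from the journal.
journalctl -u remote-upgrade -f

# If dpkg still gets interrupted somehow:
sudo dpkg --configure -a
```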





  • I am not the person to be asking, I am no Docker expert. It is my understanding that depends_on: defines start order. Once a service is started, it's started. If it has an internal check for “healthy”, I believe Watchtower will restart unhealthy containers.

    This is the blind leading the blind though; I would check the documentation if using Watchtower. We should both go read the depends_on documentation, as we both use it. (The healthcheck pattern I think it describes is sketched below.)
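
    For what it's worth, this is the compose pattern I believe the docs describe: depends_on alone only orders startup, but paired with a healthcheck and “condition: service_healthy” the dependent service waits until the other one is actually ready. Service and image names here are made up for the example:

```bash
# Minimal sketch, written out via a heredoc so it's copy-pasteable.
cat > compose.yaml <<'EOF'
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5
  app:
    image: example/app:latest
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just startup
EOF

docker compose up -d
```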


  • It's Watchtower that I had problems with, because of what you described. Watchtower will drop your microservice, say a database, to update it and then not restart the things that depend on it. It can be great, just not in the ham-fisted way I used it. So instead I'm going to update the stack together: everything drops, updates, and comes back up in the correct order.

    Uptime Kuma can alert you when a service goes down. I'm constantly on my Homarr homepage, which tells me if it can't ping a service, and then I go investigating.

    I get that it's scary, and after my Watchtower trauma I was hesitant to go automatic too. But I'm managing 5 machines now, and scaling up by getting more, so I have to think about scale.


  • I've encountered that before, with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a service update, should mitigate all of that. I'll include a system prune in the script (rough sketch below).

    Most of my stacks are stable, so aside from breaking changes I should be fine. If I hit a breaking change, I keep backups; I'll rebuild and update manually. I think that'll be a net time saving overall.

    I keep two Docker LXCs, one for the arrs and one for everything else. I might make a third LXC for things that currently require manual updates; Immich is the only one at the moment.
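
    The script I have in mind is roughly this. It assumes the compose files live under /opt/stacks (I think that's Dockge's usual layout, but check yours):

```bash
#!/usr/bin/env bash
# Rough sketch of the stack-update script: pull new images, recreate each
# whole stack so everything comes back up in the right order, then prune.
set -euo pipefail

for stack in /opt/stacks/*/; do
  echo "Updating ${stack}"
  docker compose --project-directory "$stack" pull
  docker compose --project-directory "$stack" up -d
done

# Clear out the superseded images.
docker system prune -f
```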



  • Release: stable

    Keep the updates as hands-off as possible: Docker Compose, TTeck's LXC updater, automatic upgrades.

    I come through once a week or so to update the stacks (Dockge > stack > update), and once a month or so to update the machines (I have 5 in total). Total time updating is about 3 hours a month. I could drop that a lot once I get around to writing some scripts to update the Docker images (a sketch of scheduling that is at the end of this comment); then I'd just have to “apt update && apt upgrade”.

    Minimise attack surface and outsource security. I have nothing at all open to the internet; I use Tailscale to create tunnels. I'm trusting my security to Tailscale, but they are much, much better at it than I am.
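
    Once that update script exists, the weekly stack pass could go straight into cron so it's hands-off too. A sketch, with a hypothetical script path:

```bash
# Crontab entry (add via "sudo crontab -e"): run the stack-update script
# every Monday at 04:00 and keep a log. The script path is a made-up example.
0 4 * * 1  /usr/local/bin/update-stacks.sh >> /var/log/update-stacks.log 2>&1
```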



  • I am sorry, I am but a worm just starting Docker and I have two questions.

    Say I set up Pi-hole in a container. Then say I use Pi-hole's web UI to change a setting, like switching the web UI to the midnight theme.

    Do changes persist when the container updates?

    I am under the impression that updating a container means the old one is deleted and a fresh install takes its place, so all the changed settings vanish.

    I understand that I am supposed to write files to define the parameters of the install. How am I supposed to know what to write to define the changes I want? (Something like the bind mounts sketched below, maybe?)

    Sorry to hijack, the question doesn’t seem big enough for its own post.
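
    A minimal sketch of the pattern that usually answers this: bind-mount Pi-hole's config directories so web-UI changes are written to the host and survive the container being recreated on update. The paths are the ones I believe the pihole/pihole image documents; double-check its README before relying on them:

```bash
# Settings changed in the web UI land under /etc/pihole inside the container,
# so mounting it from the host keeps them across container recreations.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ=Europe/London \
  -v "$PWD/etc-pihole":/etc/pihole \
  -v "$PWD/etc-dnsmasq.d":/etc/dnsmasq.d \
  pihole/pihole:latest
```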