

I'm glad I've already pulled my Audible library into Audiobookshelf; I didn't have many ebooks, so I didn't bother with them. I'm moving to Libro.fm this month, I think.
Update went fine on a bare metal install. Customising the web UI port is a little easier now; instead of editing lighttpd.conf, I think you can do it in the UI.
I struggled to find some settings; I looked for ages for the API token. Found it under All Settings: Expert, then scroll half a mile down to the webUI/API section.
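For the port specifically, I think it can also be done outside the UI. If I've read the v6 docs right it lives in /etc/pihole/pihole.toml and there's a config CLI for it; the key name below is my best guess, so double-check it before trusting it:

```bash
# Assumption: Pi-hole v6 exposes the web server port as webserver.port
# through the FTL config CLI -- verify against the docs for your version.
sudo pihole-FTL --config webserver.port          # read the current value
sudo pihole-FTL --config webserver.port 8080     # set a custom port
```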
Also struggled with adding CNAMEs in bulk; I thought you could do that in the old UI. You might be able to in the new UI, but I just one-by-one'd them.
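The bulk approach I had in mind is the old v5 layout, appending lines to the custom CNAME dnsmasq file. v6 may keep these in pihole.toml instead, so treat this as a sketch rather than gospel; the hostnames are made up:

```bash
# v5-style bulk CNAMEs: one "cname=<alias>,<target>" per line.
sudo tee -a /etc/dnsmasq.d/05-pihole-custom-cname.conf <<'EOF'
cname=jellyfin.home.example,server.home.example
cname=paperless.home.example,server.home.example
cname=immich.home.example,server.home.example
EOF
sudo pihole restartdns    # reload so the new records take effect
```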
Docker update went flawlessly.
I still have an LXC one to go, which is a task for another day, unless TTeck's updater beats me to it.
+1 for running pihole in an LXC, and a redundant pihole in a docker container.
They never update at the same time or in the same way, so it's near-as-dammit constant uptime.
My main storage is a mirrored pair of HDDs. Versioning is handled there.
It Syncthings an "important" folder to a local backup with only one HDD.
That local backup Syncthings to a single SSD at my parents' house.
My setup could be better: if I put the versioning on my local backup it'd free space on my main storage, and I could migrate to dedicated backup software, Borg maybe, instead of Syncthing. But Syncthing was what I knew and understood when I was slapdashing this together. It's a problem for future me.
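If future me ever gets around to it, the Borg side is roughly this. The repo path and source folder are placeholders, and it's just my reading of the Borg docs, not something I'm running yet:

```bash
# One-time: create an encrypted repository on the backup disk.
borg init --encryption=repokey /mnt/backup/borg-repo

# Each run: snapshot the "important" folder, then thin out old archives.
borg create --stats /mnt/backup/borg-repo::important-{now} /srv/important
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```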
I've been seriously considering an EliteDesk G4, or a Dell/Lenovo equivalent, as a backup machine. Mirrored drives, and enough oomph to HA the things that use the "important" files: Immich, Paperless, etc.
My big problem is remote stuff. None of my users have aftermarket routers where I can easily manipulate DNS, and one has an Android modem thing which is hot garbage. I'm using a combination of making their Pi their DHCP server, and one user is running on Avahi.
Chrome, the people's browser of choice, really, really hates HTTP, so I'm putting them on my garbage ######.xyz domain. I had plans to one day deal with HTTPS, just not this day. Locally I only use the domain for Vaultwarden, so it didn't matter. But if other people are going to be using it, I'll have to get a more memorable one.
System updates have been a faff. I'm SSHing over Tailscale. When Tailscale updates it kicks me out, naturally, which interrupts the session, naturally, which stops the update, naturally. It also fucks up dpkg beyond what --configure -a can repair. I'll learn to update in the background one day, or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.
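The "update in the background" bit is probably just running the upgrade inside tmux (or screen), so a dropped Tailscale session doesn't take apt down with it. Something like this; the session name is arbitrary:

```bash
tmux new -s upgrades            # start a session (or later: tmux attach -t upgrades)
sudo apt-get update && sudo apt-get dist-upgrade -y
# If the connection drops mid-upgrade, reconnect and re-attach:
#   tmux attach -t upgrades
# Only if dpkg really was interrupted:
#   sudo dpkg --configure -a
```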
Locally it all works as intended though, so that's nice. Everything also works remotely for my fiancée and me, all as intended, which is also nice. My big project is coalescing what I've got into something rational; I'm on the "make it good" part of the "make it work > make it good" cycle.
In that case: Homarr is awesome, no complaints.
I probably won't apply this retroactively; my family aren't going to explore, and it was more to keep them on their specific homepage and stop them getting lost. New users will be locked to their specific page, and I don't expect they'll ever go exploring to find out.
+1 for Homarr. I didn't need to learn how to write any configs. Everything can be set up in real time, in the GUI, and is immediately testable. Homarr brought a homepage down to my skill level.
My only wish is to lock homepages behind user permissions, but it's fine; my family and friends don't intend to explore, just to get where they're going.
That was my conclusion as well; however, I'm at work and it's not appropriate to be reading Docker documentation. Thank you for the write-up.
I am not the person to be asking, as I am no Docker expert. It is my understanding that depends_on: defines start order; once a service is started, it's started. If it has an internal check for "healthy", I believe Watchtower will restart unhealthy containers.
This is the blind leading the blind though; I would check the documentation if using Watchtower. We should both go read the depends_on docs, as we both use it.
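From my limited reading, the long form of depends_on can also wait on a healthcheck rather than just start order. A minimal sketch of what I mean; the service names and images are placeholders, not my actual stack:

```bash
# Writes a toy compose file where "app" waits for "db" to report healthy,
# then brings it up. depends_on alone would only control start order.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: nginx:alpine        # stand-in for the dependent service
    depends_on:
      db:
        condition: service_healthy
EOF
docker compose up -d           # app starts only after db's healthcheck passes
```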
It's Watchtower that I had problems with, because of exactly what you described: Watchtower will drop a microservice, say a database, to update it and then not restart the things that depend on it. It can be great, just not in the ham-fisted way I used it. So instead I'm going to update the stack together: everything drops, updates, and comes back up in the correct order.
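Concretely, the plan per stack is just this (the path is illustrative), instead of letting Watchtower pick services off one at a time:

```bash
cd /opt/stacks/arr        # wherever the stack's compose file lives
docker compose pull       # fetch new images for every service in the stack
docker compose up -d      # recreate changed containers in dependency order
```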
Uptime Kuma can alert you when a service goes down. I'm constantly in my Homarr homepage, which tells me if it can't ping a service; then I go investigating.
I get that it's scary, and after my Watchtower trauma I was hesitant to go automatic too. But I'm managing five machines now and adding more, so I have to think about scale.
I've encountered that before, with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a per-service update, should mitigate all of that. I'll include a system prune in the script.
Most of my stacks are stable, so aside from breaking changes I should be fine. If I do hit a breaking change, I keep backups; I'll rebuild and update manually. I think that'll be a net time save overall.
I keep two Docker LXCs, one for the arrs and one for everything else. I might make a third LXC for the things that currently require manual updates; Immich is the only one at the moment.
Release: stable
Keep the updates as hands-off as possible. Docker Compose, TTeck's LXC updater, automatic upgrades.
I come through once a week or so to update the stacks (Dockge > stack > update), and once a month or so to update the machines (I have 5 total). Total time updating is about 3 hours a month. I could drop that a lot once I get around to writing some scripts to update the Docker images; then I'd just have to "apt update && apt upgrade".
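The script I have in mind is roughly this; /opt/stacks is where Dockge keeps my compose files, so adjust the path to taste:

```bash
#!/usr/bin/env bash
set -euo pipefail
for stack in /opt/stacks/*/; do
  echo "Updating ${stack}"
  (cd "${stack}" && docker compose pull && docker compose up -d)
done
docker image prune -f    # clear out the images the update superseded
```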
Minimise attack surface and outsource security. I have nothing at all open to the internet; I use Tailscale to create tunnels. I'm trusting my security to Tailscale, but they are much, much better at it than I am.
Thank you.
I am sorry, I am but a worm just starting Docker and I have two questions.
Say I set up Pi-hole in a container. Then say I use Pi-hole's web UI to change a setting, like switching the web UI to the midnight theme.
Do changes persist when the container updates?
I am under the impression that a container updating means the old one is deleted and a fresh install takes its place, so all the settings changes vanish.
I understand that I am supposed to write files to define the parameters of the install. How am I supposed to know what to write to define the changes I want?
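(For reference, this is what I currently understand by "files to define parameters": a run command with volume mounts, cribbed from tutorials. The paths and tag are my own guesses.)

```bash
# My understanding: mounting these host folders puts Pi-hole's config outside
# the container, so in theory it survives pulling a new image and recreating
# the container -- which is exactly what I'm asking about.
docker run -d --name pihole \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  -e TZ=Europe/London \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  pihole/pihole:latest
```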
Sorry to hijack, the question doesn’t seem big enough for its own post.
Seeding to ratios is self-correcting, in my inexperienced opinion (I only share ISOs).
An unpopular thing sits on someone's computer (not mine) for ages, happily waiting until it's useful. A popular thing is in and out. This is purely for files intended to be churned: try a distro (in Facebook's case, a book), use it, and delete it.
A 1:3 ratio could be said to be a minimum (1 to pay back, 1 to pay forward, and 1 to cover a leecher).
Things that are going to be archived can be set to seed without limit, as long as the strain on the hardware can be tolerated.