

Pretty sure the country of origin for Apple products isn’t the US anyway. They’d be coming from China, India, etc. Reciprocal tariffs on the US should have no effect on Apple products sold in other countries.
I mean, you’re not wrong, but the same applies to all phone manufacturers. Samsung, Pixel, etc. are going to see similar price hikes due to tariffs in the US, and a similar drop in demand in China as the population there moves to Chinese manufacturers. I’m not sure why you’re singling out Apple.
The main reason is that if you don’t already have the right key, the VPN doesn’t even respond; it’s just a black hole where all packets get dropped. SSH, on the other hand, will respond whether or not you have a password or a key, which lets the attacker know that there’s something there listening.
That’s not to say SSH is insecure; I think it’s fine to expose once you take some basic steps to lock it down. I’m just answering the question.
Some people move the port to a nonstandard one, but that only helps against automated scanners, not determined attackers.
While true, cleaning up your logs so that you can actually see a determined attacker, rather than having them buried in the noise, is still worthwhile.
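For the curious, the “basic steps” are mostly just this (a rough sketch assuming a Debian/Ubuntu box; adjust paths and service names for your distro):

# in /etc/ssh/sshd_config: key-only auth, no root password logins
#   PasswordAuthentication no
#   PermitRootLogin prohibit-password
sudo systemctl reload ssh
# fail2ban's default sshd jail bans repeat offenders, which also keeps the logs readable
sudo apt install fail2ban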
Reverse proxy + DNS-challenge wildcard cert for your domain. The end. Super easy to set up and zero maintenance. Adding a new service is just a couple clicks in your reverse proxy and you’re done.
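If you’re handling the cert outside the proxy, the DNS-challenge wildcard looks roughly like this with certbot (assuming the Cloudflare DNS plugin; reverse proxies like Caddy, Traefik, or Nginx Proxy Manager can also do the DNS-01 challenge themselves so it renews hands-off):

sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d "example.com" -d "*.example.com"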
Yes, at a cursory glance that’s true. AI-generated images don’t involve the abuse of children, and that’s great. The problem is the follow-on effects. What’s to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming they’re AI generated?
AI image generation is getting absurdly good now, to the point that the output is nearly indistinguishable from actual pictures. By the end of the year I suspect it will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn’t if AI-generated CP is legal?
It wouldn’t matter. The public doesn’t listen directly to politicians, it gets filtered through the media first, and the media picks and chooses which parts they actually report. The people who would actually hear this already know. The people who would need to hear it never will because Fox won’t show it to them.
I don’t understand why everything isn’t just rated in Wh or mWh. It gives them a bigger number to advertise and it’s voltage-independent. Sure, there are load-dependent conversion efficiencies that complicate things a bit, but nobody is going to get up in arms about a 5% deviation from the advertised spec due to less-than-ideal conversion efficiency. Compared to trying to figure out how many recharges I’ll get on my 5000 mAh laptop battery from my 20000 mAh power bank (what voltage is that laptop battery running at again?), a 5% efficiency drop is a big nothingburger.
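To make that concrete (the voltages here are typical nominal values, not specs from any particular device):

20000 mAh × 3.7 V ≈ 74 Wh (power bank)
5000 mAh × 11.4 V ≈ 57 Wh (laptop battery)
74 Wh × 0.95 / 57 Wh ≈ 1.2 full charges

versus the 4 charges the raw mAh numbers would suggest.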
Yes, and Bitwarden+SimpleLogin. Bitwarden to keep track of login info, including the alias that is used for that site. SimpleLogin is where the aliasing is actually handled; they have a decent UI for enabling/disabling aliases, or generating reverse aliases (for outgoing emails) when needed.
It does take a little more effort to manage, but it’s worth the payoff. I’ve been using this setup for about 9 months now and I finally got my first spam email a week ago. I looked at the address it was sent to; it was an alias I used at a site I ordered something from about 6 months ago. I sent them a message letting them know that either someone at their company is selling customer info to scammers or their database has been leaked, then I shut off the alias. No more spam.
It gives people the option to use an alternate app store if they want, but it doesn’t force anyone to.
That argument sounds great in theory, but it would break down within a month, when companies start moving their apps off of Apple’s App Store and onto a 3rd-party store that allows all the spyware Apple currently forces them to remove as a condition of being on iOS. This move DOES force people to use alternate app stores when companies move (not copy, move) their apps over to those stores to take advantage of the drop in oversight.
Same, I don’t let Docker manage volumes for anything. If I need it to be persistent, I bind mount it into a subdirectory next to that container’s compose file. It makes backups so much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you keep all of your compose files and volumes, and then restart them all.
It also means you can go hog wild with docker system prune -af --volumes
and there’s no risk of losing any of your data.
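In practice that backup flow is just something like this (a rough sketch, assuming one subdirectory per compose project under ~/docker; the ~/backups path is a placeholder):

for d in ~/docker/*/; do (cd "$d" && docker compose stop); done
tar czf ~/backups/docker-$(date +%F).tar.gz -C ~ docker
for d in ~/docker/*/; do (cd "$d" && docker compose start); done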
I would separate the media and the Jellyfin image into different pools. Media would be a normal ZFS pool full of media files that gets mounted into any VM that needs it, like Jellyfin, sonarr, radarr, qbittorrent, etc. (preferably read-only mounted in Jellyfin if you’re going to expose Jellyfin to the internet).
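Something like this as a rough sketch (pool and dataset names are made up, and it assumes Jellyfin runs as a container rather than a full VM):

zfs create tank/media
# read-write for the *arr/download stack, read-only where it's exposed to the internet
docker run -d --name jellyfin -v /tank/media:/media:ro jellyfin/jellyfin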
As far as networking, from what I could see the only real change CasaOS was making was mapping its dashboard to port 80, not much more. Is there anything more I should be aware of in general?
It depends on how you have things set up. If you’re just doing normal docker compose networking with port forwards then there shouldn’t be much to change, but if you’re doing anything more advanced like macvlan then you might have to set up taps on the host to be able to communicate with the container (not sure if CasaOS handles that automatically).
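For reference, that host-side macvlan workaround usually looks something like this (interface names and addresses are made up, and it’s only needed if the host itself has to talk to macvlan containers):

ip link add macvlan-host link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-host
ip link set macvlan-host up
ip route add 192.168.1.200/32 dev macvlan-host   # route to the container's IP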
The nice thing about docker is all you need to do is back up your compose file, .env file, and mapped volumes, and you can easily restore on any other system. I don’t know much about CasaOS, but presumably you have the ability to stop your containers and access the filesystem to copy their config and mapped volumes elsewhere? If so, this should be pretty easy. You might have some networking stuff to work out, but I suspect the rest should go smoothly and IMO it would be a good move.
When self-hosting, the more you know about how things actually work, the easier it is to fix when something is acting up, and the easier it is to make known good backups and restore them.
While true, and I have a lot of DRM-free music that I’ve bought from Apple, the difference is that getting music purchased from Apple onto your computer in a usable format is a bit of a pain, and it’s all lossy. Music from Qobuz can be downloaded directly from their site after purchasing, in lossless FLAC format, and many of their albums are available in high-res 24-bit and/or 96 kHz format as well.
Would you mind if I added this as a discussion (crediting you and this post!) in the github project?
Yeah that would be fine
But in a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between the integer and fractional parts.
Sure, it’s a bit hack-and-slash, but not too bad. Honestly the dockcheck portion is already pretty complete; I’m not sure what you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things: it runs dockcheck (with the flags shown below), and it moves the resulting list into place, writing "None" when there are no pending updates.
Basically there are 5 steps to the setup.

First, enable Docker’s built-in Prometheus metrics endpoint by adding this to /etc/docker/daemon.json and restarting the Docker daemon:
{
  "metrics-addr": "127.0.0.1:9323"
}
Once running, you should be able to run curl http://localhost:9323/metrics and see a dump of Prometheus metrics.

Second, the custom notification plugin mentioned above; its send_notification function just dumps the list of containers with available updates to a comma-separated list in a file:
send_notification() {
    Updates=("$@")
    UpdToString=$(printf ", %s" "${Updates[@]}")
    UpdToString=${UpdToString:2}
    File=updatelist_local.txt
    echo -n "$UpdToString" > "$File"
}
Third, the wrapper script that runs dockcheck and puts the list where the API can find it:

#!/bin/bash
cd "$(dirname "$0")"
./dockcheck/dockcheck.sh -mni
if [[ -f updatelist_local.txt ]]; then
    mv updatelist_local.txt updatelist.txt
else
    echo -n "None" > updatelist.txt
fi
At this point you should be able to run your script, and at the end you’ll have the file “updatelist.txt”, which will either contain a comma-separated list of all containers with available updates, or “None” if there are none. Add this script to cron on whatever cadence you want; I use 4 hours.
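For example (the path is just a placeholder):

0 */4 * * * /home/user/docker/dockcheck-wrapper.sh

Fourth, the REST API that pulls the Docker metrics and the update list together into a single endpoint: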
#!/usr/bin/python3
from flask import Flask, jsonify
import os
import time
import requests
import json

app = Flask(__name__)

# Listen addresses for docker metrics
dockerurls = ['http://127.0.0.1:9323/metrics']

# Other dockerstats servers
staturls = []

# File containing list of pending updates
updatefile = '/path/to/updatelist.txt'

@app.route('/metrics', methods=['GET'])
def get_tasks():
    running = 0
    stopped = 0
    updates = ""
    # Tally container states from the local Docker metrics endpoint(s)
    for url in dockerurls:
        response = requests.get(url)
        if (response.status_code == 200):
            for line in response.text.split("\n"):
                if 'engine_daemon_container_states_containers{state="running"}' in line:
                    running += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="paused"}' in line:
                    stopped += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                    stopped += int(line.split()[1])
    # Fold in results from other copies of this server (if any)
    for url in staturls:
        response = requests.get(url)
        if (response.status_code == 200):
            apidata = response.json()
            running += int(apidata['results']['running'])
            stopped += int(apidata['results']['stopped'])
            if (apidata['results']['updates'] != "None"):
                updates += ", " + apidata['results']['updates']
    # Read the local update list, but only if it's fresh (less than a day old)
    if (os.path.isfile(updatefile)):
        st = os.stat(updatefile)
        age = (time.time() - st.st_mtime)
        if (age < 86400):
            f = open(updatefile, "r")
            temp = f.readline()
            if (temp != "None"):
                updates += ", " + temp
        else:
            updates += ", Error"
    else:
        updates += ", Error"
    if not updates:
        updates = "None"
    else:
        updates = updates[2:]
    status = {
        'running': running,
        'stopped': stopped,
        'updates': updates
    }
    return jsonify({'results': status})

if __name__ == '__main__':
    app.run(host='0.0.0.0')
The neat thing about this program is that it’s nestable: if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), you can pick one of the machines to be the “master” and update the “staturls” variable to point at the others, letting it collect the data from the other copies of itself into its own output. If the output only needs to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down. Once this is running, you should be able to run curl http://localhost:5000/metrics and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added to “staturls”. You can then turn this program into a service, or launch it @reboot in cron or from /etc/rc.local, whatever fits your management style, so it starts on boot. Note that it verifies the age of the updatelist.txt file before using it; if the file is more than a day old, that likely means something is wrong with the dockcheck wrapper script, so rather than using stale data the REST API will report “Error” to let you know something is wrong.
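To start it on boot with cron, for example (the path and script name are placeholders):

@reboot /usr/bin/python3 /home/user/docker/dockerstats.py

Fifth and last, the widget config for the dashboard, which just maps the three fields out of the REST API response: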
widget:
  type: customapi
  url: http://localhost:5000/metrics
  refreshInterval: 2000
  display: list
  mappings:
    - field:
        results: running
      label: Running
      format: number
    - field:
        results: stopped
      label: Stopped
      format: number
    - field:
        results: updates
      label: Updates
Lots of ways to get around that without going the route of burning a hundred Blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you “zfs send” to and cycle on some regular schedule would handle it. So buy 3 drives, back up your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, a friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. At any given time you have an on-site copy that’s up to date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc., and super easy to maintain and restore from.
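One rotation cycle would look roughly like this (pool and snapshot names are made up, and the incremental send assumes the previous rotation’s snapshot still exists on both sides):

zpool import backup1                      # plug in and import the external drive's pool
zfs snapshot -r tank@backup-new
zfs send -R -I tank@backup-old tank@backup-new | zfs recv -F backup1/tank
zpool export backup1                      # then unplug it and swap with one of the off-site drives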