

Yup, there are few efficient ways to handle that, so anything that does it looks something like everything else that handles it.
Sadly, not many things handle it :)
WYGIWYG
Plex does this on its own. It’s one of the features they provide. The client/service knows when the server is local even though you go outside to make the initial connection, and they go through a lot of trouble to do this: you connect externally, it brokers the initial connection and proxies data back and forth while testing whether client and server can reach each other directly, and once your client knows the server is local, it switches over.
I don’t know of any other video hosting package that does this. Jellyfin certainly does not. I *think* if you threw Tailscale in the middle it would be able to do it without hairpinning, as long as you were using a local exit node instead of routing over the tailnet, though traffic would still probably pass through that local exit node.
As people have said, the Intel CPU with quick sync will be much better on power.
You could also use your M.2 to cache your regular hard drive with Btrfs and LVM, or something like https://bcache.evilpiepirate.org/
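The LVM flavor of that looks roughly like this (a sketch, not a recipe; the device names and the cache size are assumptions):

```sh
# HDD partition holds the data LV; NVMe partition becomes its cache.
pvcreate /dev/sda1 /dev/nvme0n1p2
vgcreate vg0 /dev/sda1 /dev/nvme0n1p2
lvcreate -l 100%PVS -n data vg0 /dev/sda1               # data LV lives on the spinning disk
lvcreate --type cache-pool -L 200G -n cpool vg0 /dev/nvme0n1p2
lvconvert --type cache --cachepool vg0/cpool vg0/data   # attach the NVMe pool as cache
mkfs.btrfs /dev/vg0/data                                # then format however you like
```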
Maybe spin down your HDD when it’s not being used. Most of your power savings are going to come from not transcoding unless you need to, transcoding efficiently when you need to, and powering things down when you don’t need them.
In Linux you can mess with your clock regulation, probably even put the box to sleep when you don’t need it, and maybe use wake-on-LAN to bring it back.
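A few hedged one-liners for those ideas (device and interface names are placeholders):

```sh
hdparm -S 241 /dev/sda    # spin the HDD down after 30 minutes idle
powertop --auto-tune      # apply the kernel's suggested power tunables
ethtool -s eno1 wol g     # arm wake-on-LAN (magic packet) on the NIC
# ...then, from another box on the LAN:
wakeonlan aa:bb:cc:dd:ee:ff
```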
You don’t really want to live-transcode 4K; that takes a tremendous amount of horsepower to do in real time. When you rip your movies, you want to make sure they’re in some format that whatever player you’re using can handle. If that means you use a streaming stick in your TV instead of the app on your TV, that’s what you do. I think you could technically do it with a 10th+ gen Intel with embedded video; I know that an Nvidia 2070 Super on a 7th gen Intel will not get the job done for an upper-end Roku. So all of my 4K video is either H.264 or HEVC, and it all direct plays on my flavor of Roku.
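For the ripping step, something like this HandBrakeCLI call produces an HEVC file most players can direct play (file names and the quality value are assumptions, tune to taste):

```sh
# Software HEVC encode at constant quality; audio tracks passed through untouched.
HandBrakeCLI -i movie-remux.mkv -o movie.mkv \
  -e x265 -q 20 \
  --all-audio --aencoder copy
```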
Fork their project :) CLI clients are easy enough to use on PC, Mac, and Linux; all we need is for someone to build Android and iOS clients.
They have enough open source code out there to make the CLI clients and server.
It could be forked right now and turned into a separate project. It’s BSD-3 licensed, so you can re-release with modifications. A couple of multi-platform devs could manage it.
DAS is 1:1; it’s more or less like just connecting an external hard drive to your computer.
SAN can do some crazier stuff. You can take arrays, attach them to LUNs, and assign LUNs to separate computers. You have fiber-optic routing and virtual networks, sometimes iSCSI. But that stuff is extremely expensive and power hungry, and did I mention extremely expensive?
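For flavor, carving a disk into an iSCSI LUN on Linux’s LIO target looks roughly like this (every name and IQN below is made up):

```sh
# Expose /dev/sdb as a LUN and grant one initiator access to it.
targetcli /backstores/block create name=lun0 dev=/dev/sdb
targetcli /iscsi create iqn.2024-01.lan.storage:array1
targetcli /iscsi/iqn.2024-01.lan.storage:array1/tpg1/luns create /backstores/block/lun0
targetcli /iscsi/iqn.2024-01.lan.storage:array1/tpg1/acls create iqn.2024-01.lan.client:host1
```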
NAS is basically just a computer with disks attached to it, sharing the data through whatever protocols you need.
For home use, even sharing with extended family, TrueNAS, Unraid, or just a computer with ZFS is ideal.
ZFS is the elite but slightly harder way to do it. The devices in a vdev all get used as if they’re the size of the smallest one, even if your disks are different sizes. There’s regular maintenance that needs to be applied, but it’s very fast and very flexible and very easy to expand.
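That maintenance is mostly periodic scrubs; a minimal sketch with a hypothetical three-disk pool:

```sh
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd   # pool with one-disk redundancy
zfs create tank/media                                 # a dataset for your files
zpool scrub tank                                      # the periodic maintenance pass
zpool status tank                                     # health and scrub progress
```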
Unraid is very slow but very flexible. The disks aren’t in a RAID, they’re in a JBOD, so it’s really, really slow, but if you lose one disk all you’ve lost is the data on that disk, and you can run up to two parity disks, as long as your parity drives are as large as your largest data drive.
TrueNAS is more of an Unraid-type situation but with ZFS. Both Unraid and TrueNAS support virtualization and/or containers for running applications, and give you nice metrics and meters and stuff.
You can also hand-roll it with Debian, ZFS, Docker, and Proxmox.
I think DAS is pretty much dead. If you have a ton of ephemeral data and you need to do high-speed work on it, it’s a reasonable solution, but I think 8 TB NVMe drives have made it pretty niche.
Why not both?
There is no functional difference between them scraping you systematically and them coming to you on behalf of a user. They’re coming to scrape you either way; being asked by someone is just going to make them do it in a smarter fashion.
Also, even if you’re not using Gemini, damned if Google.com doesn’t search you with it anyway. They badly want these AIs trained; sooner or later almost all searching will be done through AI, and eventually there will be no option.
You are correct that blocking all AI calls will eventually make your search results not work.
So if you want organic traffic, you have to allow AI scraping eventually. You’re just going to see diminishing returns up to a point.
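For what it’s worth, the only lever you have today is robots.txt, and it’s honor-system; a sketch using the published crawler tokens (GPTBot for OpenAI, Google-Extended for Gemini training, CCBot for Common Crawl — the webroot path is a placeholder):

```sh
cat > /var/www/html/robots.txt <<'EOF'
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
EOF
```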
Oh, Plex has the risk. A vulnerability in Plex is how LastPass lost all their source code. A vulnerability in Tautulli, which the engineer had exposed externally, surfaced his Plex auth token; the attacker used that token to get into Plex, hit an RCE vulnerability, and pulled the entire git repo the guy had locally.
The key difference is that Plex at least has a security team, and their name is on the line with their investors.
but, think of it… RACING STRIPES!!! or FLAMES!!!
You use bamboo skewers to mount the things off the bottom and dampen vibration. Maybe use an internal flap and vent the disks out the front and the PSU out the back. If you have enough cardboard, you could even bend it a bit and do it like a jet engine with the fan sticking out the front.
Cardboard papercraft homelab… I almost want to get rid of my 42U rack and make a Voltron now.
Just needs a 10" cardboard box with proper holes
A lot of neophyte self-hosters will try running the binary in Windows instead. Experienced self-hosters will indeed use Docker.
Then, out of the ones that are using Docker, some of them will set it up as privileged.
And then how many of those people actually mount the media read-only, versus how many just add the path and don’t think about it?
Don’t confuse your good practices with what the average person will do.
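For reference, the careful pattern being described looks something like this (the paths are placeholders):

```sh
# Unprivileged container; the library is mounted read-only, so even a
# compromised container can't touch the media. Note: no --privileged flag.
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```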
I’ve heard jellyfin has a lot of security issues
The biggest known stuff I saw on their GitHub is that a number of the exposed service URLs under the hood don’t require auth. So: it’s open source with known endpoints, you can easily tell from the outside that it’s running, and you can exercise a LOT of its code without logging in. That’s one zero-day in any package that can be passed a payload away from disaster.
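The "tell from the outside" part is easy to demo; Jellyfin’s public system-info endpoint answers without any credentials (hostname is a placeholder):

```sh
# No login, no token -- any reachable Jellyfin responds with its version.
curl -s http://your-server:8096/System/Info/Public
# => {"ServerName":"...","Version":"10.x.x","Id":"...",...}
```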
As far as tvOS, I’m kinda surprised Swiftfin doesn’t serve you.
Location sensor would be a good minimum bar.
A custom card for your app that is basically just an iframe into your app with auth would also be pretty decent. Your version of a map looks really nice.
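Home Assistant’s stock webpage card is basically that already; a rough sketch for a YAML-mode dashboard (the URL is made up, and this doesn’t solve the auth passthrough):

```sh
# The card config, dropped into a view in a YAML-mode dashboard.
cat >> ui-lovelace.yaml <<'EOF'
    - type: iframe
      url: https://tracker.example.lan/map
      aspect_ratio: 75%
EOF
```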
Maybe surfacing metrics like distance traveled or number of geolocations.
I’ll have to install the app and play around with it to make other recommendations but those are the first things that come to mind.
AWS has an r4.8xlarge with 244 GB RAM and 32 vCPUs for $2.13 an hour if you can handle Linux, $2.81 an hour for Windows.
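Quick math, though: $2.13/hour × ~730 hours in a month is about $1,555/month for the Linux instance, so renting that much iron only really makes sense in bursts.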
Choosing the right hardware is complicated. If you are transcoding 4K video on Jellyfin, you probably want an Nvidia 1080 or higher video card.
If you’re running Intel, 10th gen and higher with integrated graphics has some pretty good encoding efficiency, so you consume less power for a lot more work done.
I’m still rocking a 7th gen i7 with a 2070 super. It still gets the job done for me.
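For a sense of what the GPU is doing, the server’s transcode boils down to something like this NVENC encode (file names and bitrate are assumptions):

```sh
# Hardware HEVC encode via NVENC; audio copied through untouched.
ffmpeg -hwaccel cuda -i input-4k.mkv \
  -c:v hevc_nvenc -preset p5 -b:v 20M \
  -c:a copy \
  output.mkv
```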
Been in it since the web was a thing. I agree wholeheartedly. If people don’t run auto-updates (and newbies will not run manual updates), you’re just teaching them how to make vulnerabilities.
Let them learn how to fix an automatic update failure rather than how to recover from ransomware. No contest here.
Not OP, and I don’t particularly hate PHP, but I certainly understand why everyone else does. It had a ton of horrible issues that didn’t get fixed until 8. Just really awful stuff like `"23a" + "7n"` evaluating to 30, inconsistent syntax; it’s just had a lot of holes over the years. After Perl, it had the next greatest number of plugins and was reasonably rapid, so it took off with the inexperienced crowd, but we ended up with a lot of code written by a lot of inexperienced people, and a lot of best practices were eschewed. Most of the big software names that run PHP have had a constant stream of really bad vulnerabilities, more so than a lot of other languages (WordPress, phpBB, vBulletin, a million horribly written WordPress plugins).
Personally, in a pinch I’ll still do something in PHP. It’s so incredibly rapid and gives you marginally decent debugging right out of the gate with nothing installed.
I did Linux on the desktop for 15 years. I was primarily Windows at home, Linux at work. With a job change, I took a detour through Mac for a couple of years, then WSL hit, and I ran Windows for quite a while.
I dropped back in, but only at home, when Bookworm landed. I was playing Steam games with video acceleration right out of the gate. For a lot of people it’s just going to work out of the box, and updates are just going to work. Now that a lot of shit’s going Electron, a lot of apps that had an edge on Windows are now identical through their web interfaces.
If you’re not playing games with a lot of anti-cheat, don’t use proprietary hardware, and don’t need access to some Windows-only apps (or you can put up with Wine), all the distros are at the point where they operate just as you’d expect them to.