  • 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 22nd, 2023

  • I did (am doing) something very similar. I definitely have issues with my indexing, but I’m just ordering it manually by year/date for now.

    I’m doing a little extra for parity, though. I’m using 50-100GB discs for the data, and burning a 25GB disc as a full parity disc (via dvdisaster) for each data disc I burn. Hopefully that reduces the risk of the parity data being unreadable at the same time as the data, and it gives MORE parity data without eating into my actual data discs. It’s hard enough to break the archives into 100GB chunks as it is. (There’s a rough sketch of the per-disc parity step below.)

    Need to look into Bacula, as suggested by another poster.
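
    Roughly, the per-disc parity step I mean looks like the sketch below. This is just an illustration, not my exact setup: it assumes dvdisaster is installed and each archive chunk is already mastered into an ISO; the staging path and helper names are made up, and the -i/-e/-c/-t flags should be double-checked against your dvdisaster version. Burning the resulting .iso and .ecc files to their separate discs isn’t shown.

    import subprocess
    from pathlib import Path

    ARCHIVE_DIR = Path("/archive/staging")  # hypothetical directory of already-mastered data-disc ISOs

    def make_parity(image: Path) -> Path:
        """Create a standalone .ecc file for one data-disc image.

        The .ecc file is what gets burned to the separate 25GB parity disc,
        so the data disc and its parity data don't fail together.
        """
        ecc = image.with_suffix(".ecc")
        subprocess.run(["dvdisaster", "-i", str(image), "-e", str(ecc), "-c"], check=True)
        return ecc

    def verify(image: Path, ecc: Path) -> None:
        """Re-check a data image against its parity file (e.g. before or after burning)."""
        subprocess.run(["dvdisaster", "-i", str(image), "-e", str(ecc), "-t"], check=True)

    if __name__ == "__main__":
        for iso in sorted(ARCHIVE_DIR.glob("*.iso")):
            verify(iso, make_parity(iso))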




  • It’s not a transcoding power issue. It’s a UI consistency and usability issue. With every device having a slightly different UI, with some apps having issues when playing back natively and some needing transcoding, the experience is inconsistent and frankly doesn’t pass the “wife acceptance factor” test, or the “let your friends use it without needing to hand-hold them through regular troubleshooting for their particular device” test.

    I still don’t use Plex and exclusively use Jellyfin, but it’s a hard sell to non-technical users. Plex has much more polish.



  • I think the universal consensus is that outside of one very specific use case (multiple VDI desktops sharing the same base image), ZFS dedupe is completely useless at best, and at worst will effectively destroy your dataset by leaving it unmountable on any system with less RAM than the dedup table needs (rough numbers in the sketch below). In every other use case, the savings are not worth the trouble.

    Even in the VDI use case, unless you have MANY copies of said disk images (like 5+ copies of each), it’s still not worth the increase in system resources needed to use ZFS dedupe.

    It’s one of those “oooh, shiny” features that everyone wants to use and nearly everyone ends up regretting.
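
    For a sense of scale on the RAM point, here’s the back-of-envelope math. Just a sketch: the ~320 bytes per unique block is the commonly quoted rule-of-thumb figure for a core dedup table (DDT) entry, and the pool size, block size, and dedup ratio below are made-up example numbers. Something like `zdb -S <pool>` will simulate dedup against real data before you ever enable it.

    # Rough estimate of ZFS dedup table (DDT) memory, assuming ~320 bytes of
    # core RAM per unique block (commonly quoted rule of thumb, not a spec).
    def ddt_ram_bytes(data_bytes: int, avg_block_bytes: int, dedup_ratio: float = 1.0) -> float:
        """Unique blocks times ~320 bytes per DDT entry."""
        unique_blocks = (data_bytes / avg_block_bytes) / dedup_ratio
        return unique_blocks * 320

    TiB, GiB = 1024 ** 4, 1024 ** 3

    # Example: 10 TiB of data, 64 KiB average block size, a modest 1.5x dedup ratio.
    est = ddt_ram_bytes(10 * TiB, 64 * 1024, 1.5)
    print(f"Estimated DDT size: {est / GiB:.1f} GiB")  # ~33 GiB, before anything else gets to use the ARC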