Ah I see, if you want to do AI then definitely stick with the 3070, I just assumed you’d be using it for video transcoding with something like Jellyfin.
Looks good to me, although I would maybe even sell the 3070 and go for something like an Intel Arc and more RAM instead.
Welp, guess I should do my research next time. Thanks for the heads up.
Depends on the file system. I know for a fact that ZFS supports SSD caches (in the form of L2ARC and SLOG), and I believe LVM does something similar (although I've never used it).
As for the size, it really depends on how big the downloads are. If you're not downloading the biggest 4K movies in existence, you should be fine with something reasonably small like a 250 GB or 500 GB SSD (although I'd always recommend going bigger because of durability and speed).
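For the ZFS case, adding the SSD as a cache is basically a one-liner once the pool exists (pool and device names here are placeholders, check with `lsblk` first):

```
# Add an SSD partition to the pool "tank" as an L2ARC read cache
zpool add tank cache /dev/nvme0n1p1

# Add a SLOG device for synchronous writes (mirroring it is recommended)
zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Verify the cache/log devices show up
zpool status tank
```

Note that L2ARC only helps reads and SLOG only helps sync writes, so for a plain download cache a small partition of each is plenty.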
So that they can attack Taiwan to “secure local allies” would be my incredibly uneducated guess.
Great guide, especially the folder structure setup, I wasted so much storage getting that wrong at first.
But I'm wondering, why not put everything into one compose file? It might just be personal preference, but I find it a little easier to find what I'm looking for that way.
(Also, this is just a nit-pick, but including the version tag now prompts a warning that it's deprecated.)
Just out of curiosity, what do you use all that storage for?
Username checks out
Honestly same, I just know that the speeds were better when the port forwarding worked.
I think it just makes the service visible to devices from outside the network which helps them form a more direct connection. (That’s just an educated guess though)
I can confirm that seeding with Mullvad is painfully slow; if you do torrent locally, get a VPN with port forwarding.
Using the *arrs is pretty convenient if you know how to use Docker (or even if you don't), and then you can connect them to Plex or Jellyfin for viewing. It won't be instant like Netflix and co, but at least it's free/cheaper (just the cost of a VPN or seedbox). You can even set up Overseerr or Jellyseerr to simplify movie/show requests.
Radarr can track whole movie collections if that's what you mean; alternatively, there's a list function which might do what you need (I've never really used it though, so idk).
Yeah sure, here's my setup including my Transmission client. I essentially just give the Docker containers access to the whole torrent directory, instead of having one mount for the downloads and one for the media library. You also need to make sure that the *arrs are set to hardlink, which should be the default.
The hardlinking only works if the source and destination are in the same mount. For example,
/data/downloads:/downloads
/data/media:/media
will create copies and use double the storage instead of just hardlinking. To make it hardlink, you need to put the downloads and destination folders under the same mount, so make the Docker mount look like
/data:/data
instead. Then you just need to tell your torrent client to put the downloaded files into /data/downloads/(either sonarr or radarr), and the *arrs can look into their folders and hardlink the files into /data/media/whatever.
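To see why the single mount matters: a hardlink is just a second directory entry pointing at the same inode, which only works within one filesystem. A quick sketch (paths are made up to mimic the layout above):

```python
import os
import tempfile

# Two folders on the same filesystem, standing in for
# /data/downloads and /data/media inside one mount
root = tempfile.mkdtemp()
downloads = os.path.join(root, "downloads")
media = os.path.join(root, "media")
os.makedirs(downloads)
os.makedirs(media)

# The "downloaded" file
src = os.path.join(downloads, "movie.mkv")
with open(src, "wb") as f:
    f.write(b"\0" * 1024)

# Hardlink instead of copy: no extra data blocks are used
dst = os.path.join(media, "movie.mkv")
os.link(src, dst)

print(os.stat(src).st_ino == os.stat(dst).st_ino)  # True: same inode
print(os.stat(src).st_nlink)                       # 2: two names, one file
```

With two separate mounts the `os.link` call would fail with `EXDEV` (cross-device link), which is exactly when the *arrs silently fall back to copying.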
I have no clue if any of this is understandable, but I can post my docker compose once I get to my PC.
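In the meantime, a minimal sketch of what such a compose file could look like (image tags, ports, and the PUID/PGID values are assumptions, adjust to your setup):

```yaml
services:
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      # One mount for everything so hardlinks work
      - /data:/data
    ports:
      - 9091:9091
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /data:/data
    ports:
      - 8989:8989
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /data:/data
    ports:
      - 7878:7878
    restart: unless-stopped
```

Then point Transmission's download dir at /data/downloads and tell Sonarr/Radarr their libraries live under /data/media.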
I've used Flood with Transmission for years now, and it works reasonably well on mobile. I know for sure that you can upload .torrent files through it; the label system has been weird in the past though, so I'm not sure about that part.
Having been in this same position, I think I can help: you are almost definitely behind CGNAT, which means you don't have your own public IPv4 address. The two workarounds I used for this are to go IPv6-only, which gives you a public address but means you can't always access it from older networks, or to WireGuard-tunnel to a free Oracle VM and use it as a proxy.
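For the tunnel option, the home-server side of the WireGuard config looks roughly like this (keys, addresses, and the hostname are placeholders; the VM then forwards traffic from its public IP into the tunnel):

```
[Interface]
# Home server end of the tunnel
PrivateKey = <home-server-private-key>
Address = 10.0.0.2/24

[Peer]
# The Oracle VM that has the public IPv4
PublicKey = <vm-public-key>
Endpoint = vm.example.com:51820
AllowedIPs = 10.0.0.1/32
# Keepalive so the tunnel survives CGNAT state timeouts
PersistentKeepalive = 25
```

The important bit is that the home server dials *out* to the VM (so CGNAT doesn't matter), and `PersistentKeepalive` keeps the NAT mapping alive so traffic can flow back in.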