Can’t argue with that flawless logic.
I bought a used old-gen Sonos Connect about a year ago to integrate my Logitech Z906 into an existing pair of Sonos speakers. They made it deliberately tedious to downgrade those speakers (which had gotten the S2 “blessing”) back to S1 so they would work with the Sonos Connect. I’m an IT repair shop guy and I cursed all the way through that downgrade process.
I would have gladly bought current hardware from them again if their prices were anywhere within the realm of plausibility. Credit where it’s due: that Sonos Connect hookup with the two wall-mounted first-party speakers works absolutely reliably. That company has just seriously lost its bearings since it engineered those parts.
I think if you’re talking about wider demographics, your model OSes are (obviously) Windows and macOS. People buy into them because CLI familiarity isn’t required. Especially with Apple products, everything revolves around simplicity.
I do dream of a day when Linux can (at least somewhat) rival that. I love Linux because I am (or consider myself) intricately familiar with it and I can (theoretically) change every aspect about it. But mutability and limitless possibilities are not what makes an OS lovable to the average user. I think the advent of immutable Linux distros is a step in the right direction for mass adoption. Stuff just needs to work. Googling for StackOverflow or AskUbuntu postings shouldn’t ever be necessary when people just want to do whatever they were doing on Windows with limited technical knowledge.
On another note, though: if you’re talking about a home studio migration, I’m not sure what that entails, but it sounds rather technical. I don’t want to be the guy who tells you that CLI familiarity is simply par for the course. Maybe your work shouldn’t require terminal interaction. Maybe there is a certain gap between the absolutely basic Linux tutorials and the more advanced ones, like you suggest. What I do want to say is that if you want to do repair work on your own car, nobody expects that to be an easily accessible skill to acquire. Even if there are videos explaining step by step what you need to do, eventually you still need to get your own practice in. Stuff will break. We make mistakes and we learn from them. That is the point I’m trying to get at: not all knowledge can be bestowed from without. Some of it just needs to grow organically from within.
I get that, and SATA can be hot-plugged these days. I’m not saying it should rival the number of USB ports we get on motherboards, but I remember there were also those USB/eSATA hybrid ports. They would probably only work with USB 2.0, but still, they would be nice to have.
Will definitely check whether I can work OpenSuperClone into my workflows. I haven’t had failing drives drop out like that before, so I can’t speak to that scenario. Though I do wonder: if a drive drops out, why would that software have a harder time recovering under SATA?
Mark my words. Don’t ever use SATA-to-USB for anything other than (temporary) access to non-critical preexisting data. I swear to god, if I had a dollar for every time USB has screwed me over while trying to simplify working with customers’ (and my own) drives. Whenever it comes to anything more advanced than data-level access, USB just doesn’t seem to offer the necessary capabilities. Whether that’s rooted in software, hardware or both, I don’t know.
All I know is that you cannot realistically use USB to, for example, carbon-copy one drive to another. It may end up working, it may throw errors telling you it failed, or it may only seem to have worked in the end. With all the individual devices I’ve gone through, it’s hard for me to imagine that this is somehow down to the parts and that somewhere out there is something better that actually makes it work. It really does feel like whoever came up with the controlling circuits used for USB-to-SATA conversion, industry-wide, just didn’t implement everything in a way that is wholly transparent from the view of the operating system.
TL;DR: If you want to use SATA as intended, you need SATA all the way to the motherboard.
tbh I often ask myself why eSATA fell by the wayside. USB just isn’t up to these tasks in my experience.
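For what it’s worth, when I say “carbon copy” I mean a whole-drive clone with something like GNU ddrescue, and that only behaves predictably for me when both drives sit on native SATA ports. A rough sketch; the device nodes and mapfile path are just examples, double-check everything before running it:

```bash
# Whole-drive clone with GNU ddrescue; /dev/sdX (source), /dev/sdY (destination)
# and the mapfile path are examples, not real devices on your machine.
ddrescue -f -d /dev/sdX /dev/sdY /root/sdX-clone.map

# The mapfile lets you come back later and retry the bad areas a few more times:
ddrescue -f -d -r3 /dev/sdX /dev/sdY /root/sdX-clone.map
```

Over a USB bridge the same run tends to throw errors, stall on resets, or only seem to have worked, which is exactly the transparency problem I mean.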
Look. You can’t have it both ways. You can either be the “i use arch (and so should everybody else) btw” guy or you can be dumbfounded by people accusing you of being the “i use arch (and so should everybody else) btw” guy. If you do both (in succession I guess) you’re just a parody of your own pro-FOSS message.
I know I’m probably opening another can of worms by saying this, but I’m an absolute privacy advocate. And guess what? I use multiple Windows installations as part of my day-to-day. Yes, I do want that number to migrate toward zero, but so far, especially when it comes to laptops (and even more so laptops with multiple GPUs), I just never saw any appeal in crippling my own experience for the sake of subjective “freedom”.
So now imagine a person like me trying to look for help setting up a Pi-hole installation for the sake of privacy. In comes the evangelical “If you actually truly care about your privacy, why are you using Windows?” Sound familiar? How about helpful (in terms of getting someone closer to a Pi-hole installation)?
2500 miles sheesh. That shit’s nuclear war proof then.
Said like a person that doesn’t want to “argue till the end of the universe”. Maybe just take the hint once there’s multiple people trying to politely tell you the same thing? Prove that you’re not just good at fortifying the walls around your bubble. Criticism is rarely meant to attack us. Nobody is accusing you of a crime. I know it’s hard to take that step back from one’s own perspective.
Again, just because something works for you doesn’t mean you have to be evangelical about it. Don’t try to be the “I use arch btw” meme for real.
Once you face the (seemingly) inevitable necessity of further hardware purchases, it does become sort of tedious, I must say. I used to treat my RAID parity as a “backup” for way longer than I’d like to admit because I didn’t want my costs to double. With unraid I at least don’t have the same management workload I have on my main box, which runs rolling-release Arch with manually installed ZFS, where the module build always has to line up with the kernel version and all that jazz. Unraid is my deploy-and-forget box. Rsync every 24h. God bless.
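If anyone is curious what “rsync every 24h” amounts to, it’s roughly this; the hostname, dataset path and share name are placeholders, not my actual layout:

```bash
#!/usr/bin/env bash
# Nightly push from the main box to the unraid backup target.
# Hostname, source path and share name below are examples only.
set -euo pipefail

SRC="/tank/data/"                           # ZFS dataset on the main server (example)
DEST="root@unraid.local:/mnt/user/backup/"  # unraid user share over SSH (example)

# -a archive mode, -H keep hardlinks, --delete mirror deletions, --partial resume big files
rsync -aH --delete --partial "$SRC" "$DEST"

# crontab entry to run it at 03:00 every night:
# 0 3 * * * /usr/local/bin/nightly-sync.sh
```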
Proxmox was recommended to me before I switched my main server to Arch, but once I realised that it has no direct Docker support, I figured I’d rather just do things myself. It really is a matter of preference. It’s kind of hard to believe that all the functionality in Proxmox can be had for absolutely free.
“don’t owe OP an answer”
Exactly. Since the dawn of the internet, forums have been full of people countering legitimate questions with “why would you even ask that?”. Not only is nobody owed your “contribution”, it is of zero value.
“because something exists doesn’t mean it should be installed”
Elitist much? Why would you rather assume that a tech-savvy person is asking for tech guidance than the infinitely more likely opposite case? The answer is that you (the elitist) think what works for you is the only valid path and everyone must be guided to your subjective treasure. Your intentions may be benign but your methods are not.
It’s understandable that you want to take your virtualization capabilities to the next level, but like many others here I also don’t see the appeal of containerizing unraid. I started using unraid last autumn and to me it really is about being able to mix drive sizes. It’s a backup to my main server’s ZFS pool, so (fingers crossed) I don’t even really worry about drive failures on unraid. (I have double parity on ZFS and single parity on unraid.)
Anyways, my point is that I started out with 8 SATA slots plus an old USB-based enclosure with it set to JBOD mode, and that was a pretty stupid idea. unraid couldn’t read SMART data from those USB drives. Every once in a while one of the drives would suddenly show up as having an unsupported partition layout. A couple of weeks ago all 5 drives in the enclosure started showing up as unusable. So, as you can imagine, I dropped that enclosure and am now working solely off the 8 internal slots. I’d imagine that virtualizing unraid’s disk access might potentially yield similar issues. At least the comments of people here remind me of my own janky setup.
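If you want to check whether a particular USB bridge even passes SMART through before trusting it with anything, smartctl can sometimes do SAT pass-through. A quick sketch (the device node is just an example):

```bash
# Ask the drive for SMART data through the USB bridge, forcing SAT pass-through.
# /dev/sdX is an example device node; plenty of cheap bridges won't cooperate at all.
smartctl -d sat -a /dev/sdX

# If that fails, let smartctl report which device type it would auto-detect:
smartctl -d test /dev/sdX
```

In my case the enclosure’s bridge simply wouldn’t cooperate, which lines up with unraid not being able to read SMART from those drives.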
Could you share that script? Sounds like a nifty grassroots tech solution.
It’s not an assumption that transitioning to (Proton on) Linux is hard with no prior knowledge. An assumption is that you’re probably talking from the perspective of a tech-savvy person who doesn’t need to open a Lemmy thread to find the software they’re after. OP doesn’t owe you a question that computes in your head. Open source software for Windows exists, therefore it can be installed.
tbh I’ve had almost exclusively hostile(-ish) exchanges on lemmy as well, but obviously going back to that morally bankrupt place isn’t gonna be the answer.
This. Also, unless you have raw Blu-ray sources, recompressing already compressed video isn’t exactly a great idea either way. The space savings will never be worth the loss in visual quality, and if you were to actually retain the quality, the space used would probably end up similar even with a newer, more efficient codec.
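Before re-encoding anything it’s worth checking what the source actually is; something like this ffprobe call shows the codec, resolution and overall bitrate (the file name is a placeholder):

```bash
# Show the video codec, resolution and container bitrate of a file before deciding
# whether a re-encode is even worth it (movie.mkv is an example file name):
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,width,height:format=bit_rate \
  -of default=noprint_wrappers=1 movie.mkv
```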
I recently “upgraded” one of my Raspberry Pi’s SD cards to an industrial-grade one. Those seem to be a lot slower, but for that particular use case it doesn’t matter to me. What matters is that the card doesn’t die. It runs noticeably cooler when lots of data is being written to it, so I feel like I must be onto something there.
I used to just rely on a RaidZ2 (ZFS) pool (over a span of about 4 years now), and faulted drive replacements never gave me any issues, but I recently expanded the array, reinstalled the OS, and am only now starting to incorporate Docker containers into my workflows. The live data is in ~ and gets rsynced nightly onto the new, larger RaidZ2 pool, but there is also data on that pool which I’ve thus far never stored anywhere else.
So my answer to the question would be an off-site unraid install, which is still in the works. This really will only be that: catastrophe insurance. I probably won’t even rely on parity drives there, in order to maximize space, since I already have double parity on ZFS.
As far as reinstallation goes, I don’t feel like restoring ~ and running docker compose for all the services again would be too much of a hassle.
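To give an idea of why I’m not worried, the restore would roughly look like this; the backup host, paths and services layout are made-up examples, not my actual setup:

```bash
# Rough shape of the rebuild after an OS reinstall.
# Host, paths and the services directory layout are placeholders.

# 1. Pull the home directory back from the backup pool
rsync -aH backupbox:/tank/backup/home/me/ /home/me/

# 2. Bring every service back up from its compose file
for d in /home/me/services/*/; do
  (cd "$d" && docker compose up -d)
done
```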
My solution is basically what @mojolobo mentions, with Nextcloud behind it, and I love the concept. Because Obsidian just syncs with the “Notes” folder in my Nextcloud root (via a WebDAV plugin on the phone), it really is just a bunch of .md (markdown) files. It gives me an added sense of security (on top of the self-hosting aspect) because I can see those files everywhere I have Nextcloud installed, and I can edit them manually if I want to. On the PC you just point the Obsidian app to the folder; on phones you do it via a WebDAV plugin.
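And because it’s all plain WebDAV underneath, you can sanity-check that the notes really are just files sitting on the server; the host, username, app password and folder name here are example values:

```bash
# List the Notes folder straight off Nextcloud's WebDAV endpoint.
# Host, username, app password and folder name are placeholders.
curl -u alice:app-password \
  -X PROPFIND -H "Depth: 1" \
  "https://cloud.example.org/remote.php/dav/files/alice/Notes/"
```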