Storage is still an unsolved problem in 2024: NIMBY opposition blocks new sites, some old storage facilities are failing, and it's a problem our kids will have to solve and pay for. I'm a nuclear proponent, or I used to be one, but that ship has sailed for good.
Legend, thank you!
Since you mentioned MLC, maybe you have some suggestions for e.g. used server-grade disks? Would the RPi be able to run something like the Intel datacenter SSDs, e.g. the S3700? The power loss protection is really something I would like to have, especially in a homelab scenario. Or any other notable MLC drives with larger capacities? I'm having trouble finding a good list sorted by maximum capacity.
Crucial is fine, but IIRC the 1TB+ variants are too good to be true (cheap) and will die quite fast. Just a note for everyone to look into the underlying technology of the particular model.
Log2ram is a service which keeps your log files in RAM, avoiding the constant writes to disk, which really helps with SD card longevity. It probably helps with SSDs too.
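If you want to see what you're actually saving, here's a rough sketch (my own, nothing to do with log2ram itself) that samples /proc/diskstats twice and reports how many bytes were written to the card in between. The device name mmcblk0 is just an assumption for a Pi's SD card.

```python
# Sample /proc/diskstats twice and report bytes written to the device
# in between. "mmcblk0" is an assumed name for a Raspberry Pi SD card.
import time

DEVICE = "mmcblk0"
INTERVAL = 60  # seconds between samples

def sectors_written(device: str) -> int:
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[9])  # sectors written, 512 bytes each
    raise ValueError(f"device {device} not found")

before = sectors_written(DEVICE)
time.sleep(INTERVAL)
after = sectors_written(DEVICE)
print(f"{(after - before) * 512} bytes written in {INTERVAL}s")
```

Run it before and after enabling log2ram and compare the numbers.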
You can just Google it and check out the GitHub page, no need for the LLM accuracy lottery.
I do not have to share passwords with 10-50 people, and neither did the OP imply this. I am having trouble figuring out the reasoning behind your message. Why would this be a normal use case?
I mean this comment well. You seem to be clueless about the problems open source projects are facing: free work, and hopefully the maintainer doesn't burn out before they can hand the torch to someone else.
Are you not aware of the countless issues with absolutely unsustainable open source projects out there in the wild?
We need a cultural change and a way to normalize supporting and paying (whoever can afford to) for good open source projects.
I am importing my externally synced and managed library into Immich. It does not create any structure or edit the files.
Yes, a thousand times this. deSEC is awesome; I moved my domain record management there. I usually buy domains on Namecheap, and their API's IP allow-list requirement was just too annoying to deal with.
This: Let's Encrypt with the DNS challenge, https://desec.io/ to manage the DNS records, and https://github.com/go-acme/lego or Traefik to manage the certificates and do the DNS challenges for you.
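If you'd rather script it yourself than let Traefik handle it, here's a minimal sketch of driving lego's deSEC provider from Python. It assumes the lego binary is on PATH and that DESEC_TOKEN holds an API token from the deSEC dashboard; the email and domain are placeholders.

```python
# Minimal sketch: run lego's DNS-01 challenge with the deSEC provider.
# Token, email and domain below are placeholders, not real values.
import os
import subprocess

env = dict(os.environ, DESEC_TOKEN="your-desec-api-token")  # assumed token

subprocess.run(
    [
        "lego",
        "--email", "you@example.com",
        "--dns", "desec",
        "--domains", "*.home.example.com",
        "run",
    ],
    env=env,
    check=True,
)
```

Wrap it in a cron job or systemd timer with `renew` instead of `run` and you're basically done.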
Running ZFS on consumer SSDs is an absolute no-go; you need datacenter-rated ones for power loss protection. Price goes brrrrt €€€€€
I too had an idea for an SSD-only pool, but I scaled it back and only use it for VMs and databases. Everything else is on spinning rust: two disks in a mirror with regular snapshots and off-site backups.
Now if you don't care about your data, you can just spin up whatever you want on a 120€ 2TB SSD. And then cry once it starts failing under average load.
Edit: having no power loss protection with ZFS has an enormous negative impact on performance and tanks your sync-write IOPS.
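If you want to see the difference for yourself, here's a crude sketch that times small O_SYNC writes and prints IOPS; run it once on the consumer SSD and once on a drive with power loss protection. The path and counts are made up for illustration.

```python
# Crude sync-write benchmark: time COUNT small O_SYNC writes and report
# IOPS. The path is an assumed mount point on the pool under test.
import os
import time

PATH = "/tank/ssd-test/syncfile"   # assumed path on the pool under test
BLOCK = b"\0" * 4096               # 4 KiB blocks
COUNT = 1000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
start = time.monotonic()
for _ in range(COUNT):
    os.write(fd, BLOCK)
elapsed = time.monotonic() - start
os.close(fd)
os.unlink(PATH)

print(f"{COUNT / elapsed:.0f} sync write IOPS")
```

Drives with PLP can acknowledge sync writes from their protected cache, which is where the huge gap comes from.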
This, just pg_dump properly and test the restore against a different container. Bonus points for spinning up a new app instance and checking that it gets along with the restored DB.
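Something along these lines; names, port and image tag are placeholders, and I'm assuming password handling for the live dump via .pgpass or similar.

```python
# Rough sketch: dump the live DB, then restore it into a throwaway
# Postgres container on a different port. All names/ports are placeholders.
import os
import subprocess
import time

DB = "appdb"
DUMP = "/tmp/appdb.dump"

# 1. Dump the live database in custom format.
subprocess.run(
    ["pg_dump", "-Fc", "-h", "localhost", "-U", "app", "-d", DB, "-f", DUMP],
    check=True,
)

# 2. Spin up a scratch Postgres container on a different port.
subprocess.run(
    ["docker", "run", "-d", "--rm", "--name", "restore-test",
     "-e", "POSTGRES_PASSWORD=test", "-p", "5433:5432", "postgres:16.4"],
    check=True,
)
time.sleep(10)  # crude wait for startup; pg_isready would be nicer

# 3. Restore into the scratch instance; pg_restore fails loudly if anything is off.
subprocess.run(
    ["pg_restore", "-h", "localhost", "-p", "5433", "-U", "postgres",
     "-d", "postgres", "--create", DUMP],
    env=dict(os.environ, PGPASSWORD="test"),
    check=True,
)
```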
And remember, friends don’t let friends use latest. Pin the versions in your manifests and version control everything.
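Not a real tool, but a little sketch of how you could sweep an infra repo for unpinned images; the repo path is an assumption.

```python
# Sketch: walk an infra repo and flag compose services using ":latest"
# or no tag at all. The repo location is an assumption.
import pathlib
import yaml  # pip install pyyaml

REPO = pathlib.Path("~/infra").expanduser()  # assumed repo location

files = list(REPO.rglob("docker-compose*.y*ml")) + list(REPO.rglob("compose.y*ml"))
for compose in files:
    services = (yaml.safe_load(compose.read_text()) or {}).get("services", {})
    for name, spec in services.items():
        image = (spec or {}).get("image", "")
        if image and (":" not in image or image.endswith(":latest")):
            print(f"{compose}: service '{name}' uses unpinned image '{image}'")
```

Handy as a pre-commit hook so unpinned images never make it into the repo in the first place.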
I know what you mean. Most people mean well; some are a bit too aggressive, but they probably mean well too. I honestly roll my eyes sometimes when I start reading about Tailscale, Cloudflare Tunnels etc. The main thing is not to expose anything you don't absolutely need to expose.
For access from the outside, the most you should need is a random high port forwarded for SSH into a dedicated host (which can be a VM or container if you don't have a spare Raspberry Pi), plus WireGuard on a host where the server package gets updated regularly. So probably not on your router, unless the vendor is on top of things.
Regarding ansible and documenting, I totally get your point. Ten years ago I was an absolute Linux noob and my flatmate had to set up an IRC bouncer on my RPi. It ran like that for a few years and I dared not touch anything. Then the SD card died and took down the bouncer, dynDNS and a few other things running on it.
It takes me a lot of time to write and test my Ansible playbooks and custom roles, but every now and then I have to move services between hosts, and this is an absolute life saver. Whenever I'm really low on time and need to get something up and running, I write down the steps in a README in my infra repository and occasionally work through that backlog when I have nothing better to do.
One word of advice: document the steps you take to deploy things. If your hardware fails or you make a simple mistake, it can cost you weeks of work to recover. This is a bit extreme, but I take my time when setting things up and automate as much as possible using Ansible. You don't have to go that far, but the ability to just scrap things and redeploy gives great peace of mind.
And right now you are reluctant to do this because it's going to cost you too much time. This should not be the case. I mean, just imagine things going wrong in a year or two, when you can't remember most of what you know now. Document your setup and write a few scripts. It's a good start.
Same. I buy all my domains there. And in case someone needs a proper API and support for the DNS challenge, host your DNS at deSEC.
You don't need 8 drives when the new ones are 8 times larger than your current ones. I went from planning for 5+ drives to just two drives in a mirror. If I need more space later, I can expand the pool with another mirror.
Unless you need uptime and want to guarantee an SLA for your own services, you are much better off with a mirror or raidz1. Do regular backups (off-site, incremental) and don't fear disk failure.
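For the incremental off-site part, the core of it is just zfs send -i piped over ssh. A hypothetical sketch, assuming daily date-named snapshots (so yesterday's already exists) and made-up dataset names and remote host:

```python
# Sketch: take today's snapshot and send it incrementally to an off-site
# host over ssh. Dataset names, the remote host and the naming scheme
# are assumptions; yesterday's snapshot must already exist.
import datetime
import subprocess

DATASET = "tank/data"               # assumed local dataset
REMOTE = "backup@offsite.example"   # assumed off-site host
REMOTE_DATASET = "backup/data"      # assumed dataset on the remote pool

today = datetime.date.today().isoformat()
prev = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

subprocess.run(["zfs", "snapshot", f"{DATASET}@{today}"], check=True)

# Incremental send from yesterday's snapshot, received on the remote side.
send = subprocess.Popen(
    ["zfs", "send", "-i", f"{DATASET}@{prev}", f"{DATASET}@{today}"],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
send.wait()
```

Tools like sanoid/syncoid do this properly (retention, pruning, resume), but the underlying mechanism is the same.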
Same, I have a bunch of “inbox” folders and drop files into my server or desktop from my phone with 3 clicks.
Does it support the Docker Compose v2 plugin (i.e. 'docker compose', not the old 'docker-compose' command)?
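If you're not sure which flavour a host has, a quick hypothetical check:

```python
# Check which Compose flavour is available: the v2 plugin answers to
# "docker compose", the legacy Python tool ships a "docker-compose" binary.
import shutil
import subprocess

def has_compose_plugin() -> bool:
    if shutil.which("docker") is None:
        return False
    result = subprocess.run(["docker", "compose", "version"], capture_output=True)
    return result.returncode == 0

print("docker compose (v2 plugin):", has_compose_plugin())
print("docker-compose (legacy v1):", shutil.which("docker-compose") is not None)
```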