Looks like Three doesn’t block it…
Statping-ng has had some updates beyond the original base project.
Smokeping is probably your best choice, as it shows latency over time and not just up/down.
Write your own SELinux module with audit2allow.
I’m not at work so I can’t find the guides I use, but this looks similar: https://danwalsh.livejournal.com/24750.html
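As a rough sketch of the audit2allow flow (the module name "mycustomapp" is just a placeholder):

```
# pull recent AVC denials out of the audit log and turn them into a local policy module
ausearch -m AVC -ts recent | audit2allow -M mycustomapp

# review the generated .te file first -- blindly allowing denials can hide real problems
cat mycustomapp.te

# load the compiled module
semodule -i mycustomapp.pp
```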
MySQL over NFS can have serious latency issues. Some software can’t file lock correctly over NFS either, which will cause write latency or just full-blown errors.
iSCSI drops, however, can be really really bad and cause full filesystem corruption. Backing up iSCSI volumes can be tricky too. Software will likely work better and feel happy on it, but underlying issues may be masked until they’re unfixable. (I had an iSCSI volume attached to VMware silently corrupt for months before it failed and lost the data, even though all scrubs/checksums were good until the very last moment.)
You can make your situation work with either technology; both are just as valid. You’d get a touch more throughput on iSCSI simply because the write confirmation is drive-based rather than filesystem-lock / OS-based.
YMMV
I’ve had issues with this too and reverted back to rootful Docker. I even tried Podman with system NFS mounts that it binds to, with varying issues.
It looks like you can’t actually do this with Podman, for various reasons.
Power line adaptors
Same… although maybe I’m just searching for niche problems.
Hadrian’s wall still stands… Although in most places it’s easy to cross 😂
DL380 G9. Those BIOSes don’t support booting from PCIe at all.
They actually do, but it can only be an HPE-supported boot ROM… anything non-HPE is ignored (weirdly, some Intel and Broadcom cards PXE boot without the HPE firmware, but not all).
Most of these boards have internal USB and internal SD slots which you can boot from with any media; in fact HPE sell a USB SD-card RAID adaptor for the USB slot. So I would recommend using an SD card for this…
Ports 80 and 443.
The CLI is easy and you could just cron (scheduled task) a bunch of commands to open the firewall, renew the cert and close the firewall. It’s how I do it for some internal systems.
I’m not sure what you’re running, but I would look into certbot.
Either using the basic web plugin or the DNS plugin. The nginx route would be simpler; you’d just have to open your web ports on certificate generation to pass the challenge.
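A rough sketch of the cron approach, assuming ufw and the certbot CLI (swap in whatever firewall you actually use):

```
#!/bin/sh
# open the HTTP port just long enough for the ACME challenge
ufw allow 80/tcp

# renew anything close to expiry (certbot skips certs that aren't due yet)
certbot renew --quiet

# close the port again afterwards
ufw delete allow 80/tcp
```

Drop that in cron or a systemd timer and forget about it.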
I know some proxy tools have Let’s Encrypt support built in, such as Traefik.
Technically it is. As someone else mentioned, text is copied on federation, which means you as an admin need to actively moderate the instances you federate with, and that may cause you issues from a legal standpoint, whether that’s right or not. Facebook etc. have protections that mean they’re not liable for user content; you as an individual instance admin, however, would need to fight for those rights.
Sure it’s a rubbish thing they did but I also understand it completely.
SQLite doesn’t like NFS; the file locking isn’t reliable/fast enough, so any latency in the storage can cause data loss, corruption or just slow things down.
However, migrating SQLite to MySQL is relatively peanuts; Postgres less so…
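Very rough sketch of the dump-and-import route (file, user and database names are just placeholders; the dump needs massaging or a converter tool before MySQL will accept it):

```
# dump the SQLite database to plain SQL
sqlite3 app.db .dump > app.sql

# NOTE: the dump is SQLite-flavoured SQL (PRAGMAs, AUTOINCREMENT, double-quoted
# identifiers), so clean it up or run it through a converter first

# then import into MySQL
mysql -u appuser -p appdb < app.sql
```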
Still it’s a nice move for those that don’t run containers on a single host with local filesystems.
Trying to shift/cash out legitimate XMR is very hard now.
Federation more or less means the info is copied, so from a DMCA standpoint the instance is still liable. If content is deleted from the main instance, it isn’t always deleted from a federated one.
This would be different if you could proxy the data on federation instead of copying it.
This is the best answer.
Use the Lutris website to find the game and its install script; that should bypass the login check and let you bring your own exe.
Check out Frotz; it runs Z-code games. There are loads of games at IfArchive, with some packs listed on intfiction.
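Rough example on a Debian-ish box, assuming the frotz interpreter (the story file name is just a placeholder):

```
# install the interpreter (Debian/Ubuntu package name)
sudo apt install frotz

# grab a .z3/.z5/.z8 story file from the IF Archive, then run it
frotz zork1.z5
```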
Ceph works best if you have identical OSDs (quantity, type and capacity) across the cluster; it also works best on a 3+ node cluster.
I ran a mixed SATA SSD/HDD 256GB/4TB cluster and it was always a bit pants. Now I have 7x 1TB SSDs per node (4 nodes) and it works fantastically.
Proxmox defaults to replica 3/2 with failure at the host level, but you may find that EC (erasure coding) works better for your mixed infra since, as you noticed, you can’t meet the 3-host requirement. Setting the failure domain to OSD level means data may end up kept on a single host, so the other machines would need to traverse the network to reach it.
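Roughly, the failure domain is just a CRUSH rule / EC profile choice; a sketch (rule, profile and pool names here are placeholders):

```
# replicated rule that keeps copies on separate hosts (the usual default)
ceph osd crush rule create-replicated rep-host default host

# replicated rule at OSD level -- copies can land on the same host, so one host
# dying can take out every replica, and other nodes read across the network
ceph osd crush rule create-replicated rep-osd default osd

# small-cluster EC profile: 2 data + 1 coding chunk, host failure domain
ceph osd erasure-code-profile set ec-21 k=2 m=1 crush-failure-domain=host
ceph osd pool create mypool-ec 64 64 erasure ec-21
```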
You may also need more than a single 10Gb NIC, as you might start hitting bandwidth issues.
Importantly, and how it’s different to FF, is that it boots the content without calling the disk reset, and if you keep the disk button wedged then that reset never triggers, so the copy protection isn’t called. FF, on the other hand, basically triggers a drive reset, which is why you couldn’t use that.