• 0 Posts
  • 16 Comments
Joined 9 months ago
Cake day: February 10th, 2024

  • A ground-up overhaul of the copyright system would make things so much worse, not better, given who currently holds the power. In the US, for example, the MPA, RIAA, Entertainment Software Association, Association of American Publishers, and others wouldn’t want public libraries or the used market to exist at all; they would push to make every single transfer of “ownership” of any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups’ desires than the public good.

    The worst parts of the current copyright system are also the most recent. Both the DMCA and the extension of the US copyright term to 95 years took effect in 1998, and the early 2000s saw many other countries passing laws to bring their copyright systems closer to the US’s in various ways, such as the WIPO Copyright Treaty, which took effect in 2002, and the EU’s 2006 Copyright Directive. Just about the only positive news in US copyright law since then has been the temporary exemptions to the DMCA’s anti-circumvention rules (Section 1201), which are revised every three years. Copyright law was far less hostile to consumers and the public before the ’90s than it is now, and up until 1976 it was expected that most media someone consumed would enter the public domain within their lifetime.

    The digital era makes market relevance more ephemeral than ever, and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios judge whether a film succeeded almost exclusively on its first week of ticket sales, yet simultaneously claim that withholding works from the public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don’t even last as long as copyright: CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today’s low-quality paper will fall apart.

    Some of it is absurd to me, like the way something can be online but geographically restricted.

    This is a consequence of contract terms more so than copyright. One issue in copyright law that this does connect to, though, is that whether the rightsholder keeps a work reasonably available on the market has no bearing on whether the work retains copyright protection. If copyright law did hypothetically include that limitation, providers would become far more likely to make sure all content is available in all countries, though even then content could still vary in terms of which platform carries it.


  • That’s strange. As far as I can tell from web searches, every version of Windows still defaults to storing local time to the hardware clock, there are no reports of that changing with an update, and there is no exposed setting to configure this behavior outside of the registry. If you’re curious enough, you can check the current setting in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation: Windows treats the hardware clock as UTC if and only if the RealTimeIsUniversal value is present and nonzero.
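
    In case it’s useful, here’s a minimal sketch of that check in Python using the standard-library winreg module (Windows-only; on a default install the value is typically absent, meaning the RTC is interpreted as local time):

    import winreg

    path = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "RealTimeIsUniversal")
            print("RTC treated as UTC" if value else "RTC treated as local time")
    except FileNotFoundError:
        # Value (or key) not present: the Windows default, RTC holds local time
        print("RealTimeIsUniversal not set: RTC treated as local time")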

    I expect it’s more likely some other issue would make the BIOS display an hour that’s inconsistent with your local timezone. For example, maybe a bug in the BIOS, maybe a timezone offset setting within the BIOS, or maybe a dead clock battery.


  • Unix time is far less universal in computing than you might hope. A few exceptions I’m aware of:

    • Most real-time clock hardware stores the datetime as separate binary-coded decimal fields, one byte each for the month, day, hour, minute, and second, and often the year too (resulting in a year-2100 limit); see the short sketch after this list.
    • Python’s datetime, WIN32’s SYSTEMTIME, Java’s LocalDateTime, and MySQL’s DATETIME similarly have separate attributes for year, month, day, etc.
    • NTFS stores a 64-bit number representing time elapsed since the year 1601 in 100-nanosecond resolution for things like file creation time.
    • NTP uses an epoch of midnight 1900-01-01 with unsigned seconds elapsed and an unusual base-2 fractional part.
    • GPS uses an epoch of midnight 1980-01-06 with a week number and time within the week as separate values.
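
    To illustrate the BCD point in the first item: each byte packs two decimal digits, one per nibble, so reading an RTC register involves a small conversion. A rough sketch (the register values below are made up, and the actual field layout varies by RTC chip):

    def bcd_to_int(b: int) -> int:
        # Each nibble holds one decimal digit, e.g. 0x59 -> 59
        return (b >> 4) * 10 + (b & 0x0F)

    # Hypothetical register dump for 23:59:58 on 2024-12-31
    regs = (0x24, 0x12, 0x31, 0x23, 0x59, 0x58)
    print([bcd_to_int(v) for v in regs])
    # [24, 12, 31, 23, 59, 58] -- the two-digit year is why 2100 is a limit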

    Converting between time formats is a common source of bugs, and each format overflows in its own way: depending on the format, a time value might overflow in the year 2036, 2038, 2070, 2100, 2156, or 9999.
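
    As a rough illustration of a few of those rollover dates (a sketch using Python’s datetime; the exact instants depend on the signedness and field widths described above):

    from datetime import datetime, timedelta, timezone

    # 32-bit signed Unix time: seconds since 1970-01-01, overflows in 2038
    print(datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=2**31 - 1))

    # NTP era 0: unsigned 32-bit seconds since 1900-01-01, wraps in 2036
    print(datetime(1900, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=2**32 - 1))

    # GPS: the original 10-bit week counter wraps every 1024 weeks;
    # the first rollover after the 1980-01-06 epoch landed in August 1999
    print(datetime(1980, 1, 6, tzinfo=timezone.utc) + timedelta(weeks=2**10))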

    Also, Unix time is often paired with a separate nanoseconds component for increased resolution, as in C’s struct timespec and in modern *nix filesystems like ext4, XFS, Btrfs, and ZFS.


  • as soon as the BIOS loaded and showed the time, it was “wrong” because it was in UTC

    Because you don’t use Windows. Windows by default stores local time, not UTC, to the RTC. This behavior can be overridden with a registry tweak. Some Linux distro installer disks (at least Ubuntu and Fedora, maybe others) will try to detect whether your system has an existing Windows install and mimic this behavior if one exists (equivalent to timedatectl set-local-rtc 1), and otherwise default to storing UTC, which is the saner choice.

    Storing local time on a computer that has more than one bootable OS becomes a particularly noticeable problem in regions that observe DST, because each OS will try to shift the RTC by one hour on its first boot after the time change.


  • Yes.

    My home server has dropbear-initramfs installed so that after a reboot I can reach the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both the rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that unlocks using a keyfile stored on that rootfs, and I keep backups of an encrypted copy of that keyfile.

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.



  • I’m not sure if this is required. Any decent e-mail server uses TLS to communicate these days, so everything in transit is already encrypted.

    In transit, yes, but not end-to-end.

    One feature that Proton advertises: when you send an email from one Proton Mail account to another Proton address, the message is automatically encrypted such that (assuming you trust their client-side code for webmail/bridge) Proton’s servers never have access to the message contents for even a moment. When incoming mail from outside hits Proton’s SMTP server, though, Proton technically could (but claims not to) log the unencrypted message contents before encrypting them with the recipient’s public key and storing them; that undermines Proton’s promise of not having access to your emails. If both parties in an email conversation agree to use PGP encryption, they can avoid that risk, and no mail server on either end would have access to anything more than metadata and the initial exchange of public keys, but most humans won’t bother doing that key exchange and almost no automated mailers will.
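
    For illustration, here’s a rough sketch of that end-to-end step using the python-gnupg wrapper (this assumes a local GnuPG install and that the recipient’s public key was already obtained out of band; the file name and message are made up):

    import gnupg  # python-gnupg, a wrapper around a local GnuPG install

    gpg = gnupg.GPG()

    # Import the recipient's public key, obtained out of band.
    with open("recipient_pubkey.asc") as f:
        gpg.import_keys(f.read())

    # Encrypt to the recipient's key before the message ever reaches an SMTP
    # server; only the matching private key can decrypt the result.
    encrypted = gpg.encrypt("meet at noon", "user@proton.me", always_trust=True)
    print(str(encrypted))  # ASCII-armored ciphertext goes in the email body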

    Some standard way of automatically asking a mail server “Does user@proton.me have a PGP public key?” would help on this front, as long as the server doesn’t reject senders who ignore this feature and send SMTP/TLS as normal without PGP. This still requires trusting that the server doesn’t hand out an incorrect public key, but any suspicious behavior on that front would be far more noticeable than server-side logging would be. Users who deem that unacceptable can still use a separate set of PGP keys.


  • They say the reason for needing their bridge is the encryption at rest, but I feel like the better way to push email privacy forward would be to publish (or, better yet, coordinate with other groups on drafting) a public standard that both clients and competing email servers could adopt: an email syncing protocol built for that sort of zero-access encryption, where users have to give their client a key file. A bridge would be easier to swallow as a fallback option until there’s wider client support, rather than as the only way.

    A similar standard for server-to-server communication, such as automatic PGP key negotiation, would be nice too.

    Still, Proton has an easy-to-access data export that doesn’t require a bridge client or a subscription or anything; I think that’s required by GDPR. It’s manual enough that it’s not an effective way to keep up-to-date backups in case you ever abruptly lose access, but it’s good enough for migrating to another provider.


  • Compared to SimpleLogin (or Proton Pass aliases, addy, Firefox Relay, etc.), one other downside of a catchall is the associations it creates across accounts. Registering with a @passmail.net address implies that I use Proton; registering with random-string@mydomain.org implies I have access to that domain. If 10 data breach leaks each have exactly one account matching the latter pattern, that’s a strong sign the domain isn’t shared, and if one breached site has my mailing address, my real identity can be tied to all the others.


  • Something I’ve noticed that is somewhat related but tangential to your problem: the result I’ve always gotten from using compose files is that container names and volume names are assigned a shared prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: if there is one, and from the parent directory’s name otherwise.

    The reasons I adjust my own compose files away from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between published listen ports, mounting whatever host filesystem paths I want in the container, and my own preferences. The only conflict I’ve had with other containers is the published port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.




  • Debian. I was in a similar boat to OP and just a couple of weeks ago migrated my almost-8-year-old home server setup from Ubuntu LTS to Debian Stable. I decided to finally move away from Ubuntu because I never cared for snap (I had to keep removing it with every upgrade) and had gradually accumulated a few smaller gripes with Ubuntu. It seems good to me so far.

    I considered RHEL/Rocky but decided against them, largely because I wanted btrfs for my rootfs, which their stock kernel doesn’t include, even though I use a few Red Hat-developed tools like podman and cockpit. Fedora Server and the like have too fast a release cycle for my liking, though I do use Fedora on my desktop. That left Debian as the one remaining obvious choice.

    I also briefly considered throwing a Debian VM into TrueNAS Scale, since I also use this system as a ZFS NAS, but setting that up felt like I was fighting against the “appliance” nature of what TrueNAS tries to be.


  • If you’re assuming “as long as the hardware will function” in the first place: even digital copies, DLC, and updates installed on the system before the servers shut down will continue working, even without hacks. There’s no check-in requirement except for the subscription-locked things like SNES games.

    However, the result of a nonrepairable hardware failure when you have neither hacks nor official servers is rather bad no matter how your games were obtained: OFW does not allow you to transfer save data from one system to another without going through Nintendo’s servers, and the vast majority of cartridge games are incomplete without updates or DLC.