Imagine a world without platform lock-in, where no ban or billionaire could take down your social network. That’s the promise of ActivityPub.

  • shadesver@kbin.social · 1 year ago

    I heard a sentiment that ActivityPub and fediverse protocols aren’t great for privacy? I’ll be honest, this was from a Reddit comment and was probably biased. I don’t know enough about this yet, but I want to learn more!

    Anyone have links?

    • TheImpressiveX@lemmy.ml · 1 year ago

      When you post stuff on federated instances, it gets copied to the numerous servers that are federating with it. If you delete something, it’s only guaranteed to be deleted on your own server. The other servers are asked to delete it too, but there’s a chance that some servers will still keep the deleted posts. So I’d just say be considerate and think twice before posting, because it can’t be deleted as easily.

      https://joinfediverse.wiki/Best_practices#Data_Privacy
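      The deletion behavior described above can be illustrated with a toy model (this is a simplified sketch for intuition only, not real ActivityPub code; the server names and class are made up). The key point is that a Delete activity is merely a *request* to remote servers, which they may or may not honor:

      ```python
      # Toy model of federated post deletion. Each server keeps its own copy
      # of a federated post; a Delete activity is only a request that a
      # remote server may choose to ignore.

      class Server:
          def __init__(self, name, honors_deletes=True):
              self.name = name
              self.honors_deletes = honors_deletes  # compliant servers honor Delete
              self.posts = {}

          def receive_post(self, post_id, content):
              # Federation copies the post onto this server.
              self.posts[post_id] = content

          def receive_delete(self, post_id):
              # A misconfigured or archival server can simply keep its copy.
              if self.honors_deletes:
                  self.posts.pop(post_id, None)

      origin = Server("home.example")
      remotes = [Server("good.example"),
                 Server("archive.example", honors_deletes=False)]

      # Posting: the content fans out to every federating server.
      for server in [origin] + remotes:
          server.receive_post("post-1", "hello fediverse")

      # Deleting: the Delete activity fans out the same way...
      for server in [origin] + remotes:
          server.receive_delete("post-1")

      # ...but only the servers that choose to comply actually drop the copy.
      surviving = [s.name for s in [origin] + remotes if "post-1" in s.posts]
      print(surviving)  # ['archive.example']
      ```

      This is why the linked best-practices page advises treating every post as effectively permanent: once federated, deletion depends on every remote server cooperating.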

    • chamim@kbin.social · 1 year ago

      I think it depends on what you mean by privacy. But I do recommend you read what the Electronic Frontier Foundation has to say about the fediverse. They released an extensive series about it last year: Leaving Twitter’s Walled Garden & The Fediverse Could Be Awesome (If We Don’t Screw It Up).

      Here’s the part that addresses your concern:

      Take privacy: the default with incumbent platforms is usually an all-or-nothing bargain where you accept a platform’s terms or delete your account. The privacy dashboards buried deep in the platform’s settings are a way to tinker in the margins, but even if you untick every box, the big commercial services still harvest vast amounts of your data. To rely on these major platforms is to lose critical autonomy over your privacy, your security, and your free expression.

      This handful of companies also share a single business model, based upon tracking us. Not only is this invasion of privacy creepy, but also the vast, frequently unnecessary amount and kinds of data being collected – your location, your friends and other contacts, your thoughts, and more – are often shared, leaked, and sold. They are also used to create inferences about you that can deeply impact your life.

      Even if you don’t mind having your data harvested, the mere act of collecting all of this sensitive information in one place makes for a toxic asset. A single bad lapse in security can compromise the privacy and safety of hundreds of millions of people. And once gathered, the information can be shared or demanded by law enforcement. Law enforcement access is even more worrisome in post-Dobbs America, where we already see criminal prosecutions based in part upon people’s social media activities.

      We’re also exhausted by social media’s often parasitic role in our lives. Many platforms are optimized to keep us scrolling and posting, and to prevent us from looking away. There’s precious little you can do to turn off these enticements and suggestions, despite the growing recognition that they can have a detrimental effect on our mental health and on our public discourse. Dis- and misinformation, harassment and bullying have thrived in this environment.

      There’s also the impossible task of global content moderation at scale. Content moderation fails on two fronts: first, users all over the world have seen that platforms fail to remove extremely harmful content, including disinformation and incitement that is forbidden by the platforms’ own policies. At the same time, platforms improperly remove numerous forms of vital expression, especially from those with little social power. To add insult to injury, users are given few options for appeal or restoration.

      These failures have triggered a mounting backlash. On both sides of the U.S. political spectrum, there’s been a flurry of ill-considered legislation aimed at regulating social media moderation practices. Outside of the U.S. we’ve seen multiple “online harms” proposals that are likely to make things worse for users, especially the most marginalized and vulnerable, and don’t meaningfully give everyday people more say over their online life. In some places, such as Turkey, bad legislation is already a reality.

    • Wander@yiffit.net · 1 year ago

      They’re not, because you should assume that everything you say is public and can’t be deleted: a remote server may keep a copy of what you said.

      The key here is to not use identifiable information. Use a pseudonym.

      • TGRush@forum.fail · 1 year ago

        man am I glad that my real name is so generic that you don’t find me unless you specifically search for keywords associated with what I do (e.g “Codeberg” or “Mastodon”)

        it’s truly great to have the same name as somebody famous’ brother.