• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • But that wasn’t my point. It’s not that I think that Facebook or Google cannot scrape Fediverse platforms/instances, it’s that even if they do, they cannot serve targeted ads based on our activity here.

    We have different definitions of privacy. Since I’m active here, it should be clear that to me private doesn’t mean hidden. I like how the EFF put it in their article on the Fediverse:

    [T]he default with incumbent platforms is usually an all-or-nothing bargain where you accept a platform’s terms or delete your account. The privacy dashboards buried deep in the platform’s settings are a way to tinker in the margins, but even if you untick every box, the big commercial services still harvest vast amounts of your data. To rely on these major platforms is to lose critical autonomy over your privacy, your security, and your free expression.



  • It seems we have different priorities and concerns, and I can respect that.

    I’m skeptical of Facebook because I see the potential for it to try to take over the Fediverse. As I’ve said in another comment recently, Facebook’s business model is at odds with the Fediverse’s model, and in the long term the Fediverse has the potential to compete with large for-profit corporations. Just as it did in the past by acquiring Instagram and WhatsApp, Facebook is once again trying to prevent its own demise, this time by joining the Fediverse. Again, I’m not saying that the Fediverse is an existential threat to Facebook now, but it could be in the future. As people grow increasingly wary of big corporations stealing their data, Facebook has to pretend that it’s changing, that it has learned from its past mistakes. I just don’t buy it.

    We’re here because these large corporations have failed us.

    Yes, when I mentioned privacy concerns I wasn’t implying that Google or Facebook cannot see what we’re talking about. I was referring to this data not being used to profile us for targeted ads.

    first it will be a black hole ripping through the Fediverse.

    Not if most instances choose not to federate with Facebook. People who want to be on an instance that federates with it can sign up for one. The option not to federate is a built-in feature of the Fediverse, and I hope kbin.social takes advantage of it. If not, I’ll see myself out and look for an instance that does.

    Here’s an article that helped me understand this issue better: https://ianbetteridge.com/2023/06/21/meta-and-mastodon-whats-really-on-peoples-minds/.

  • I think it depends on what you mean by privacy, but I do recommend reading what the Electronic Frontier Foundation has to say about the Fediverse. They released an extensive series about it last year: Leaving Twitter’s Walled Garden & The Fediverse Could Be Awesome (If We Don’t Screw It Up).

    Here’s the part that addresses your concern:

    Take privacy: the default with incumbent platforms is usually an all-or-nothing bargain where you accept a platform’s terms or delete your account. The privacy dashboards buried deep in the platform’s settings are a way to tinker in the margins, but even if you untick every box, the big commercial services still harvest vast amounts of your data. To rely on these major platforms is to lose critical autonomy over your privacy, your security, and your free expression.

    This handful of companies also share a single business model, based upon tracking us. Not only is this invasion of privacy creepy, but also the vast, frequently unnecessary amount and kinds of data being collected – your location, your friends and other contacts, your thoughts, and more – are often shared, leaked, and sold. They are also used to create inferences about you that can deeply impact your life.

    Even if you don’t mind having your data harvested, the mere act of collecting all of this sensitive information in one place makes for a toxic asset. A single bad lapse in security can compromise the privacy and safety of hundreds of millions of people. And once gathered, the information can be shared or demanded by law enforcement. Law enforcement access is even more worrisome in post-Dobbs America, where we already see criminal prosecutions based in part upon people’s social media activities.

    We’re also exhausted by social media’s often parasitic role in our lives. Many platforms are optimized to keep us scrolling and posting, and to prevent us from looking away. There’s precious little you can do to turn off these enticements and suggestions, despite the growing recognition that they can have a detrimental effect on our mental health and on our public discourse. Dis- and misinformation, harassment and bullying have thrived in this environment.

    There’s also the impossible task of global content moderation at scale. Content moderation fails on two fronts: first, users all over the world have seen that platforms fail to remove extremely harmful content, including disinformation and incitement that is forbidden by the platforms’ own policies. At the same time, platforms improperly remove numerous forms of vital expression, especially from those with little social power. To add insult to injury, users are given few options for appeal or restoration.

    These failures have triggered a mounting backlash. On both sides of the U.S. political spectrum, there’s been a flurry of ill-considered legislation aimed at regulating social media moderation practices. Outside of the U.S. we’ve seen multiple “online harms” proposals that are likely to make things worse for users, especially the most marginalized and vulnerable, and don’t meaningfully give everyday people more say over their online life. In some places, such as Turkey, bad legislation is already a reality.