• 0 Posts
  • 70 Comments
Joined 1 year ago
Cake day: June 29th, 2023







  • I actually agree. For the majority of sites and/or use cases, it probably is sufficient.

    Properly explaining why LE is generally problematic takes a depth of information that I'm just not able to relay easily right now. But consider this:

    LE is mostly a convenience. It saves an operator $1 per month per certificate. For anyone with hosting costs beyond $1000, these are laughable savings. People who take TLS seriously often have more demands than a “padlock in the browser UI”. If a free service decides it no longer wants to provide OCSP, that's an annoying disruption that was entirely not worth the $1 (a quick check for stapled OCSP is sketched at the end of this comment): https://www.abetterinternet.org/post/replacing-ocsp-with-crls/

    LE has no SLA. You have no guarantee that you will ever be able to renew your certificate again. That's not a risk everyone should take.

    Who is paying for LE? If you’re not paying, how can you rely on the service to exist tomorrow?

    It wasn't too long ago that people said “only some sites need HTTPS, HTTP is fine for most”. It never was, and people should not build anything relevant on “free” security today either.
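
    To make the OCSP point concrete, here is a minimal sketch (my own illustration in Go; the hostname is a placeholder) that checks whether a server still staples an OCSP response during the TLS handshake. This is the kind of signal that quietly disappears for every consumer when a free CA drops the feature:

        // Check whether a server staples an OCSP response during the handshake.
        // Hostname is a placeholder; swap in the site you care about.
        package main

        import (
            "crypto/tls"
            "fmt"
        )

        func main() {
            conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
            if err != nil {
                panic(err)
            }
            defer conn.Close()

            // Go's TLS client asks for stapling by default; an empty response
            // means consumers are left with CRLs or no revocation check at all.
            staple := conn.ConnectionState().OCSPResponse
            if len(staple) == 0 {
                fmt.Println("no stapled OCSP response")
            } else {
                fmt.Printf("stapled OCSP response: %d bytes\n", len(staple))
            }
        }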



  • gencha@lemm.ee to Selfhosted@lemmy.world · Paid SSL vs Letsencrypt · 7 days ago

    People with actually relevant use cases and a need for a reliable partner would never use LE. It's a gimmick for hobbyists and people who suck at their job.

    If you have never revoked a certificate, you don’t really know what you’re doing. If you have never run into rate-limiting issues with LE that block a rollout, you don’t know what you’re doing.

    LE works until it doesn't, and then it's like every other free service on the internet: no guarantees. If your setup relies on the goodwill of a single entity handing out shit for free, it's not a robust setup. If you rely on that entity to keep an OCSP responder alive for free so all your consumers can verify the validity of your certificate, that's not great. And people do this to save their company $1 a month for the real thing? Even running the shitty certbot in compute has a larger cost. People are so blindly in love with this “free” garbage. The fanboys will never die off.
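
    For what it's worth, a cheap way to defuse the "works until it doesn't" risk is to monitor certificate lifetime yourself instead of trusting the renewal automation. A minimal sketch (my own, in Go; the hostname and the 30-day threshold are arbitrary placeholders):

        // Fail loudly while there is still time to react, instead of finding out
        // at expiry that renewal has been stuck behind an outage or a rate limit.
        // Hostname and the 30-day threshold are arbitrary placeholders.
        package main

        import (
            "crypto/tls"
            "fmt"
            "os"
            "time"
        )

        func main() {
            const renewBefore = 30 * 24 * time.Hour

            conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
            if err != nil {
                fmt.Fprintln(os.Stderr, "TLS handshake failed:", err)
                os.Exit(1)
            }
            defer conn.Close()

            leaf := conn.ConnectionState().PeerCertificates[0]
            remaining := time.Until(leaf.NotAfter)

            if remaining < renewBefore {
                fmt.Fprintf(os.Stderr, "certificate expires in %s, act now\n", remaining.Round(time.Hour))
                os.Exit(1)
            }
            fmt.Printf("certificate valid for another %s\n", remaining.Round(time.Hour))
        }

    Wired into a cron job or health check, this buys you weeks to deal with a renewal that got stuck, or to buy a certificate elsewhere.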





  • I'm calling bullshit on any user count they release. The site was filled with bots even when I still used it. People kept complaining about “karma farmers” as if those were real users reposting popular content. It has always been largely Reddit's own bots, there to keep new users entertained and recycle popular content so that it reaches as many users as possible. They turned this up to 11 before going public.

    Now that they no longer provide an API, they are free to make up any fake metric they want to try to pump up their worthless stock.


  • https://discord.com/terms#5 is pretty permissive

    Your content is yours, but you give us a license to it when you use Discord. Your content may be protected by certain intellectual property rights. We don’t own those. But by using our services, you grant us a license—which is a form of permission—to do the following with your content, in accordance with applicable legal requirements, in connection with operating, developing, and improving our services:

    Use, copy, store, distribute, and communicate your content in manners consistent with your use of the services. (For example, so we can store and display your content.)
    Publish, publicly perform, or publicly display your content if you’ve chosen to make it visible to others. (For example, so we can display your messages if you post them in certain servers or recommend that content to others.)
    Monitor, modify, translate, and reformat your content. (For example, so we can resize an image you post to fit on a mobile device.)
    Sublicense your content, to allow our services to work as intended. (For example, so we can store your content with our cloud service providers.)
    



  • I’d be more worried about media than the ability to pirate it.

    Music has adapted to generate plays. Platforms are already being polluted with genAI music.

    TV was replaced by streaming services. Series come and go and are very specifically tailored to get people to subscribe. Exclusives are the standard. Single-season productions are not uncommon. People are already investigating ways to pollute this pool with genAI as well.

    Movies are a stream of Marvel and Disney garbage that was already more CGI than acting. Now genAI and upscaled classics are on the menu.

    Piracy will not go away. People used to record movies with camcorders in the cinema, now they pull raw files from CDN nodes. There is always the scene. The platforms that try to profit from the scene come and go.



  • I don’t necessarily disagree, but I have spent considerable time on this subject and can see merit in decoupling your own error signaling from the HTTP layer.

    No matter how you design your API, if you’re passing through additional layers, like load balancers and CDNs, you no longer have full control over all responses your clients receive. At this point it may be viable to always signal a successful backend connection with a 200, even if the process resulted in a failure.

    Going further, your API may include partial-success scenarios (think batch processing), where the result is a mix of success and failure that doesn't translate to a single HTTP status; a rough sketch of such an envelope follows at the end of this comment.

    You could even argue that there is really no reason to couple your API so tightly with a concept of the transport layer it uses.
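
    As promised above, here is a rough sketch of that envelope style (my own illustration in Go; the field names and the /batch route are placeholders, not a standard): the transport answers 200 whenever the backend handled the request, and per-item success or failure lives in the body.

        // Envelope style: HTTP 200 means "the backend handled the request";
        // per-item outcomes, including failures, are reported in the body.
        // Field names and the /batch route are placeholders.
        package main

        import (
            "encoding/json"
            "net/http"
        )

        type ItemResult struct {
            ID    string `json:"id"`
            OK    bool   `json:"ok"`
            Error string `json:"error,omitempty"` // application-level error, not an HTTP status
        }

        type BatchResponse struct {
            Status  string       `json:"status"` // "ok", "partial" or "failed"
            Results []ItemResult `json:"results"`
        }

        func batchHandler(w http.ResponseWriter, r *http.Request) {
            // Pretend one of two items failed during processing.
            resp := BatchResponse{
                Status: "partial",
                Results: []ItemResult{
                    {ID: "a", OK: true},
                    {ID: "b", OK: false, Error: "VALIDATION_FAILED"},
                },
            }

            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(http.StatusOK) // the transport only says: the backend answered
            json.NewEncoder(w).Encode(resp)
        }

        func main() {
            http.HandleFunc("/batch", batchHandler)
            http.ListenAndServe(":8080", nil)
        }

    Whether this beats plain HTTP status codes is exactly the debate in this thread; the point is only that partial results have somewhere to go.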


  • Respect the Accept header from the client. If they need JSON, send JSON, otherwise don’t.

    Repeating an HTTP status code in the body is redundant and error prone. Never do it.

    Error codes are great. Make sure to prefix yours and keep them unique.

    Error messages can be helpful, but they often lead developers to just display them in the frontend, breaking i18n. Some people supply error messages in multiple languages, depending on the Accept-Language header. A sketch tying these points together follows below.
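
    The sketch mentioned above (my own illustration in Go; the SHOP_ prefix and the route are placeholders): honour the Accept header, ship a prefixed and unique error code, and don't repeat the HTTP status in the body.

        // Honour the Accept header and return a prefixed, unique error code.
        // The HTTP status lives only in the status line, never in the body.
        // The SHOP_ prefix and the route are placeholders.
        package main

        import (
            "encoding/json"
            "fmt"
            "net/http"
            "strings"
        )

        type apiError struct {
            Code    string `json:"code"`    // stable and machine-readable; the frontend maps it to i18n strings
            Message string `json:"message"` // for logs and debugging only
        }

        func itemHandler(w http.ResponseWriter, r *http.Request) {
            e := apiError{Code: "SHOP_ITEM_NOT_FOUND", Message: "no item with that id"}

            accept := r.Header.Get("Accept")
            if strings.Contains(accept, "application/json") || accept == "*/*" || accept == "" {
                w.Header().Set("Content-Type", "application/json")
                w.WriteHeader(http.StatusNotFound)
                json.NewEncoder(w).Encode(e)
                return
            }

            // Client did not ask for JSON, so don't send JSON.
            w.Header().Set("Content-Type", "text/plain; charset=utf-8")
            w.WriteHeader(http.StatusNotFound)
            fmt.Fprintf(w, "%s: %s\n", e.Code, e.Message)
        }

        func main() {
            http.HandleFunc("/items/missing", itemHandler)
            http.ListenAndServe(":8080", nil)
        }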