• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.

    Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it’s possible to shoehorn through.

    Addendum: local models can help with this issue, as they’re on one’s own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.


  • On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

    The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

    ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.


  • Perhaps they are bad examples, but my point was more that I think those ecosystems thrive in spite of the company that owns the upstream at this point, more than because of it. They did tremendously useful work getting the projects off the ground, but from the outside it seems like they get in the way more often than not; that said, I haven’t done any open source work on either of the two. I’d be interested to hear your take, I could be pretty far off the mark.

    Honestly, my main examples right now are situations like Manifest V3 and Android nitpicks like the recent Bluetooth 2-tap change. Don’t get me wrong, both projects are easy to fork and have thriving ecosystems in terms of volunteer dedication, but those forks are still primarily targeted toward technical users (with some exceptions) and companies selling devices like the Freedom Phone (and other, actually neat, properly privacy-focused devices, which is awesome!). By far, though, most users are on the upstream branch due to “default choice” psychology, and they have to deal with the bullshit that’s increasingly integrated into the proprietary elements, which Google seems to be making harder and harder to separate from the open source ones. I suppose that’s why education and getting the word out are all the more important.

    Could be the sensationalist end of the tech news cycle getting me spun up on an overall inaccurate view of things.

    There is also the point that security update support is always a very valuable asset, one that can be worth dealing with some downsides to get ahold of. I’m hoping a lot of those updates can be pulled into open source projects on more of a piecemeal basis where applicable?

    I’d be happy to be proven wrong about my rudimentary assessment. I have enough things to be doomer about and honestly it would be nice to have one or two fewer!


  • Chromium is still open source, as is Android to some extent. I get that the two companies (Google and Proton) are in completely different size classes, but something being open source doesn’t necessarily mean it stays healthy. Sure, people can fork it, but the issue tends to lie in continuous maintenance by volunteers versus continuous maintenance by a large company that’s constantly adding anti-features alongside desired ones.

    I’m not necessarily saying Proton will go down that route, but trying to become big and bundled as a value proposition opens the door for that behavior once they get enough people locked into the ecosystem.