Patching a newer version of the Youtube app resolved the issues with playback I was having.
I think it’s a very specific case that needs to be taken in a very narrow context; it’s essentially an innocent mistake that needs to be recognized as such. The moment you step outside of that, I see no reasonable arguments for decriminalizing anything.
I forgot: are Lemmy’s active and hot sorts chronological? They’re pretty decent, but I do find stale content does get stuck on one that isn’t there on the other.
I don’t really think it’s something people should do, but I can honestly see it happening to ordinary people if they aren’t thinking about what they’re doing.
Picking and choosing isn’t the game I want to play; I’m just highlighting that there are circumstances that can result in actually innocent people doing things without thinking. Pornographic content of any kind (drawings or otherwise) that depicts underage people in any context is something I think should be illegal and avoided at all costs, but there are edge cases in everything.
I mean, perhaps in the most general sense that is technically true. For example, there have been cases about this that have come from parents taking pictures of their kids in the bathtub, even if the charges were eventually dropped. If that particular court case had gone differently, it might’ve set a very destructive precedent that served only to rip apart families.
Still, 99% of this material is produced in an exploitative and abusive context; I’m definitely not arguing with that. No idea what Aaron was talking about in that particular link, but this is the one counterexample I can think of that is valid, assuming it went a different direction in court.
Tbh, I haven’t really had this issue in a few weeks. I’m tempted to think it’s usage-related, and could possibly indicate that my memory allocation for the DB is still too high.
Like I said, I’m aware of extant measures to try and steer models, but people often assume a level of craftsmanship in censoring models that simply does not exist. Jailbreakchat.com is an endless stream of examples of this very fact; it’s very hard, especially with the limited context lengths of current models, to effectively give them any hard directives.
And back to foundational models, which are essentially free of censorship, they will still exhibit a similar level of political bias unless prompted otherwise. All this to say that, discounting OpenAI’s attempts to control their models, the model itself will inherently learn from and mirror the real-world biases of the text it was trained on. Those biases happen to fall along lines that often ignore subtlety in debates regarding illegality and morality.
It’s hard to say what LLMs are “programmed” to do, as they’re largely untamed beasts of text prediction. In fact, I would suspect their built-in biases are less the result of pre-prompting or post-foundational-model training and more just what a lot of people tend to think online. In a way, it’s more that people in general often equate illegality with immorality.
You can see similar biases in many of the open-source LLMs that are floating around. Even though they’re basically built outside of large corporate cultures and large-scale monetary incentives, they still retain a lot of political bias that tends to favor governmental measures heavily.
ChatGPT: Your argument is invalid because it doesn’t change the legal reality of things.
Me: The legal reality needs to be changed.
The client_max_body_size setting in a few of the nginx config files needs to be increased. I don’t have the link handy, but it’s basically something you need to edit in both the /srv/lemmy internal config and several of the external nginx ones. The external ones live in a standard path for nginx installs, so a quick search on StackOverflow is how I got mine sorted.
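For reference, the change is a one-line directive in the relevant `server` (or `http`) block; the 50M value here is just an illustrative limit, not something from my setup:

```nginx
server {
    # ... existing listen/server_name/location directives ...

    # Raise the upload limit from nginx's 1MB default so larger
    # images make it through the proxy to Lemmy.
    client_max_body_size 50M;
}
```

Remember to reload nginx (`nginx -s reload` or via your service manager) after editing, and that the limit must be raised in every config that proxies the request.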
You can if you want. Reply here with the link if you do (or mention me if that’s a thing on Lemmy).
Yeah, mine have technically happened after reboots, although it typically takes a few days at least for the problem to creep up. This past time, I got a whole week in before things went to crap.
I did that a while ago, and unfortunately, it didn’t really help. I don’t think it’s an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
The problem is that an update will inherently involve a restart of everything, which tends to solve the problem anyway. Whether the update fixed things or restarting things temporarily did is only something you can find out in a few days.
I’ll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to 1.5GB instead of 2. I thought this solved the problem initially, but the problem is back and my config is still at 1.5GB (set as 1536MB to avoid unit confusion).
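For anyone tuning the same way, the relevant knobs end up in postgresql.conf. These values are illustrative, in the spirit of PGTune output for a small 2GB box with a ~1.5GB budget; they are not my exact config:

```ini
# Rough sizing for a ~1.5GB PostgreSQL budget on a 2GB VPS.
shared_buffers = 384MB          # roughly 25% of the budget
effective_cache_size = 1152MB   # planner hint, not a real allocation
work_mem = 4MB                  # per sort/hash op; multiplies with connections
maintenance_work_mem = 96MB
max_connections = 100
```

Note that `effective_cache_size` only influences the query planner, so the actual hard memory footprint is mostly `shared_buffers` plus per-connection `work_mem` usage.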
This issue occurred a few weeks ago as well, even when we had very little traffic. Our traffic is still peanuts compared with other instances.
Oh, and for completeness:
We’ve deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.
Our server is essentially always at 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.
Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. We have a potential uptick in membership, but this is still relatively slow and negligible.
This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to utilize less RAM. This is still the longest lead-up time before the spikes started.
When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component that I’m not aware of. I used the Ansible install for Lemmy, and have only modified certain configuration files as required. For the most part, I’ve only set a higher client_max_body_size in the nginx configs for larger images, and have added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe they are caused by something I have not yet explored.
These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it’s not a new issue stemming from a recent source code change.
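Since the culprit only shows up during a spike, one low-effort way to catch it is a cron job that logs the top resource consumers every few minutes, so the next spike leaves a trail. A minimal sketch (the log path and process counts are arbitrary choices, and this assumes GNU `ps`):

```shell
#!/bin/sh
# Append a timestamped snapshot of the top CPU and memory consumers
# to a log file; run from cron every ~5 minutes.
LOG="${SPIKE_LOG:-./spike-watch.log}"
{
  echo "=== $(date -u '+%Y-%m-%dT%H:%M:%SZ') ==="
  ps aux --sort=-%cpu | head -n 6   # heaviest CPU users
  ps aux --sort=-%mem | head -n 6   # heaviest memory users
} >> "$LOG"
```

Once a spike hits, `grep`-ing the log around the timestamp should show whether it’s PostgreSQL, lemmy_server, pict-rs, or something else entirely that balloons.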
The Ansible install does make things a lot simpler, but it’s still pretty involved if you’re new to self-hosting in general. For example, you might need to set up an SMTP relay if you can’t forward a workable port, and you’ll probably also want to change your nginx configs to allow uploading images larger than a single megabyte.
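For the SMTP side, the settings go in the `email` block of Lemmy’s config.hjson. A sketch with placeholder host and credentials (key names follow the 0.17/0.18-era config; double-check against your version’s defaults):

```hjson
email: {
  # Relay host and port; 587 is the usual submission port.
  smtp_server: "smtp.example.com:587"
  smtp_login: "lemmy@example.com"
  smtp_password: "your-relay-password"
  smtp_from_address: "noreply@example.com"
  tls_type: "starttls"
}
```

This lets the instance send signup and notification mail through the relay instead of needing outbound port 25 from the server itself.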
I had no idea FOSS tax software was a thing. Huh. I’ll try and play around with it at some point and let you know.