Huh, I’ve only heard business logic before.
I had this program. It was ok. There were very few titles that actually ran at all, let alone well, on my tangerine iBook or my dad’s G3 desktop. Gran Turismo 1 and 2 worked, but not at full speed. Harvest Moon: Back to Nature, Klonoa, and Spyro 1 did not work. Kinda related, but I still have my copy of Halo CE for Mac. Played the original campaign for the first time on that same tangerine iBook.
Napster, 1999.
Easily the biggest loss imo. RIP WCD.
Isn’t UTC meant to be… you know, universal?
Was going to mention this. Finding a smaller community focused on a specific project can afford more collaborative learning while contributing to projects that need help. It’s also a good way to learn humility, like finding that one person in the corner of the office who constantly picks apart your PRs without any emotion or judgement and genuinely improves your code by making you learn from your mistakes.
Yeah, I found which-key and that has been a little more useful than nothing, but it’s a half-way solution, like showing shortcuts next to menu items in a normal GUI application. Thank you for the suggestion!
I self-host services as much as possible for multiple reasons: learning, staying up to date with so many technologies through hands-on experience, and security / peace of mind. Knowing my 3-2-1 backup solution covers my entire infrastructure helps greatly in feeling less pressured to hand my data to unknown entities, no matter how trustworthy, as does knowing I have control over every step of the process and how to troubleshoot and fix problems. I’m not an expert and rely heavily on online resources to get me to a comfortable spot, but I also don’t feel helpless when something breaks.
If the choice is between trusting an encrypted backup of all my sensitive passwords, passkeys, and recovery information on someone else’s server, or having to restore a machine, container, VM, etc. from a backup after a critical failure, I’ll choose the second, because no matter how encrypted something is, someone somewhere will be able to break it given enough time. I don’t care if breaking it with accelerated or quantum computing would take millennia. Not having that payload out in the wild at all is the only way to prevent it from being cracked.
I’ve also been running OpenWRT for over 10 years and I agree, software management is archaic and painful. I exclusively use it for dumb APs now and just use OPNsense upstream to actually manage my network and devices. It’s been pretty nice in that regard for a couple of years now.
Yes, but on other sites where people have historically posted the shortened direct sub URLs that used to go to old.reddit, those links will now forward to new Reddit or simply break, according to the comments on the OP’s post. That’s the key difference here: posts people made many years ago will now go to new Reddit. That’s why I suspect this is specifically an attempt to manipulate user / site stats, because even if their API prevents search engines from indexing it properly, there’s a wealth of already-indexed links out there.
My guess: break existing old links on other sites so they can show a graph with a drop in usage of the less-ad-riddled way to access the website and drive more traffic to the ad-enabled views.
This is me wondering… is there anyone who curates and categorizes lists of open source projects actively looking for contributions? Possibly organized by experience level? It’s often hard to tell which projects are active enough that entry-, intermediate-, or experienced-level help is needed, and for what.
I would rather have super boogers than super bugs.
I really wish Rider would respect when I turn it off. It just keeps re-enabling itself.
I’d prefer GNU ddrescue, just because I find it more robust and it has better progress output. It’s functionally the same interface, but it lets you use a mapfile to resume sessions should anything interrupt the copy.
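For anyone copying this workflow, a minimal sketch of a typical two-pass run, assuming the standard GNU ddrescue CLI; the device names and mapfile path are placeholders, so double-check them before running against real disks.

```
# Pass 1: copy the easy sectors quickly, skip the slow scraping phase (-n),
# and record progress in a mapfile so the run can be resumed if interrupted.
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Pass 2: re-run with the same mapfile to retry the bad areas a few times (-r3).
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
```

If power dies or you ctrl-c it, re-running the same command with the same mapfile picks up where it left off instead of starting over.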
Arguably, I’m against this because you never know what’s going to happen, and the conventional wisdom for appliances like this is to just back up any important configs, back up your containers and VMs, then do a fresh install from the latest install media on the new disk, followed by a restore of the backups. It might take a little more time, but it’s negligible, and it gives you an opportunity to review your current configs, make necessary changes, and ensure your backups are working as intended.
Had no idea they had mod kits! Definitely gonna grab this and the N64 kit.
Tell that to Microsoft!
bool?
In my experience, big token limits mean very little even with larger context windows. 1 million tokens can easily be taken up by a very small number of complex files. It also doesn’t do great traversing a tree to selectively find context, which seems to be the most limiting factor I’ve run into trying to incorporate LLMs into complex and unknown (to me) projects. By the time I’ve sufficiently hunted down and provided the context, I’ve read enough of the codebase to answer most of the questions I was going to ask.
I have the same model, powering 3 machines with an average load of ~125 W when it switches to battery power. I have a NUT host on one of the servers that broadcasts the outage to the other machines; the whole stack shuts down after 30 seconds and switches off the UPS at the very end. I’ve gone through about 4 or 5 true power events now, and double that in testing (overzealous, I know), but the UPS is 2.5 years old and is doing just fine. I keep a spare battery because I’ve heard ~3 years is normal, but so far there’s no indication it’s due for replacement yet.
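For anyone wanting the shape of that setup, here’s a rough sketch of the NUT side of it, not my exact configs; the UPS name, hostname, and credentials are made up, and the monitoring user also has to be defined in upsd.users on the host.

```
# /etc/nut/ups.conf on the NUT host (USB-attached UPS)
[ups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf on the NUT host (primary): shuts itself down last
# and sets the killpower flag so the UPS itself is switched off at the end
MONITOR ups@localhost 1 monuser secret master
SHUTDOWNCMD "/sbin/shutdown -h +0"
POWERDOWNFLAG /etc/killpower

# /etc/nut/upsmon.conf on each other machine (secondary): they watch the
# NUT host over the network and shut down when it signals the outage
MONITOR ups@nut-host.lan 1 monuser secret slave
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

The fixed 30-second delay comes from upssched timers (start a timer on ONBATT, cancel it on ONLINE) rather than waiting for the low-battery flag, and upsd on the host has to LISTEN on the LAN interface so the other machines can reach it.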
I think the important thing with these is to not run them down to 0. They’re only good for one event at a time and shouldn’t constantly be switching over without essentially a full day of recharging in between (more like 16 h to fully recharge).
I can see consistent brownouts and events being a problem for these little machines. I’m planning on upgrading to a rack solution soon and relegating this one to my desktop in the other room (with a fresh battery of course).
If you don’t need to host it but can run it locally, GPT4All is nice: it has several models to download and play with, each with its own purpose and description, and it doesn’t require a GPU.