Individualist, Capitalist, Objectivist, Liberal, Transhumanist. Linux User + Certified, Programmer (Web Dev, Rust, a little Python), AI Tinkerer (Mostly Stable Diffusion), Gamer, Science Lover, #NAFO🇺🇦
This is why we need 3, 4, or even 5 monitors at a time.
Forever young…
Damn, I didn’t know that!
I’m not sure I understand, what’s wrong with this commit?
I don’t know if it’s the “TOS-Breaking” you’re looking for, but I’ve been using Forkgram for a while now and really appreciate the QOL improvements it has, as well as the ability to hide the Premium stuff you aren’t using.
Yeah, this meme is now very dead for me as well. xD
I learned something today… and I’m not better off for having learned it. What a dumb-ass virtue signal to slap on something.
There are plenty of ways to convince people to take privacy and security more seriously, but this isn’t it. If anything, this is more likely to make people take it less seriously. Spotify Wrapped is a fun little gimmick that a lot of folks appreciate. Heck, even I did, and I only rarely use Spotify.
14/20, nicely done! I will say, I maybe would have wanted to see more AI results than just from Dall-e 3, though given I still missed 6 of them, that speaks very highly of Dall-e 3’s capabilities. But some Midjourney and SDXL images would have made for a wider guessing selection too.
It’s honestly remarkable how few people in the comments here seem to get the joke.
Never stop dissecting things, y’all.
Imagine if someone had said something like this about the first-generation iPhone… Oh wait, that did happen, and his name was Steve Ballmer.
Heh, I’ll just leave this here for folks.
Actually a useful bot!
Fantastic resource! I’ve contributed to it myself as well.
This has already been disproven: the method the researchers used to test the model was flawed to begin with. Here is a pretty good Twitter thread showing why: https://twitter.com/svpino/status/1682051132212781056
TL;DR: They only gave it prime numbers and asked whether they were prime. They didn’t intersperse primes and non-primes to really test its ability to tell them apart. Turns out that if you do that, both the early and current versions of GPT4 are equally bad at determining prime numbers, with effectively no change between versions.
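If you want to see why the primes-only setup is broken, here’s a minimal Python sketch (not the paper’s actual harness; `ask_model` is a hypothetical placeholder for a real GPT4 query): a model that just always answers “prime” aces the primes-only test and flunks the balanced one.

```python
import random

def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for small test numbers."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def ask_model(n: int) -> bool:
    """Hypothetical stand-in for a real GPT4 'is n prime?' query.
    Modeled here as always answering 'yes' to expose the flaw."""
    return True

def accuracy(cases: list[tuple[int, bool]]) -> float:
    """Fraction of cases where the model's answer matches the label."""
    return sum(ask_model(n) == label for n, label in cases) / len(cases)

# Build a pool of random numbers and split into primes and composites.
pool = random.sample(range(1_000, 100_000), k=2_000)
primes = [n for n in pool if is_prime(n)][:50]
composites = [n for n in pool if not is_prime(n)][:50]

# The paper's setup: primes only. An always-'yes' model looks perfect.
primes_only = [(n, True) for n in primes]
print(f"primes-only accuracy: {accuracy(primes_only):.0%}")  # 100%

# A fair setup: intersperse primes and composites 50/50.
balanced = primes_only + [(n, False) for n in composites]
random.shuffle(balanced)
print(f"balanced accuracy:    {accuracy(balanced):.0%}")     # ~50%
```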
"Press F to o7 "
Yeah, if you don’t mind it possibly taking a week to download something… I really like the idea, but in practice it’s very slow for something like that, unless you’ve got a lot of seeders, maybe.