I’ve always hated Crustyroll.
Crustyroll got its start by standing on the backs of good, noble fansubbers who provided their subs for free, and now they’ve come full circle: they became an enemy rather quickly once it profited them.
I am glad to see it when the selfish people at the top fall so far down the hill. They typically orchestrate their own fall, much like Icarus on his waxen wings, plunging when he flew too close to the sun at the height of a hot summer’s day.
As for Google: I hope the DoJ not only pulls up all of the resultant weeds in the garden, but also tills and salts the soil thoroughly, so that no part of Google can ever hope to rejoin its other pieces to form a monopoly, or anything like one, on anything, ever again.
Google must rightfully suffer a most painful and enduring ‘Corporate Death Penalty’, so to speak, in order to ensure that no company ever gets so bold again. We must then repeat this with several other large companies like Microsoft, Amazon, and Apple, as well as a few other companies I’m unable to name because I’m unaware of just how ridiculously massive and monopolistic they are.
This is exactly the kind of task I’d expect AI to be useful for: it churns through a massive amount of freshly digitized data, scanning for, and flagging for human action and/or review, whatever a human has specified for it to identify in a large batch of data.
Basically, AI doing data-processing drudge work that no human could ever hope to perform at anything approaching the speed of the AI.
Do I think the AI should be doing these tasks unsupervised? Absolutely not! But the fact of the matter is, the AIs are being supervised in this task by the human clerks who are, at least in theory, expected to read the deed over and make sure it still makes some sort of legal sense, and that the AI didn’t just cut out some harmless turn of phrase in the covenant that has no racist meaning, intention, or function. I’m assuming a lot of good faith here, but I’m guessing the human guiding the AI through these mass edits can, if it ever became an issue, simply pull out the original physical document and see which language originally existed.
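To make that supervision model concrete, here’s a minimal sketch (the phrase patterns and field names are invented for illustration) of a scanner that only flags covenant language for a human clerk and never edits anything itself:

```python
import re

# Hypothetical patterns; a real project would use a curated list of
# historical covenant language vetted by legal experts.
FLAGGED_PATTERNS = [
    r"shall not be sold[, ].*to any person of",
    r"restricted to persons of the \w+ race",
]

def flag_for_review(deed_id: str, text: str) -> list[dict]:
    """Scan one digitized deed and return matches for a human reviewer.

    Nothing is edited automatically; each hit records enough context
    that a clerk can pull the original document and decide."""
    hits = []
    for pattern in FLAGGED_PATTERNS:
        for m in re.finditer(pattern, text, re.IGNORECASE):
            start = max(0, m.start() - 40)
            hits.append({
                "deed": deed_id,
                "match": m.group(0),
                "context": text[start:m.end() + 40],
                "status": "needs_human_review",
            })
    return hits
```

The point of the design is that the output is a review queue, not an edit: the AI accelerates the search, and the clerk stays the final authority.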
To be clear; I do think it’s a good thing that the law is mandating and making these kinds of edits to property covenants in general to bring them more in line with modern law.
Keybase is better than Signal. You may not like its current owners, but it still works, still functions, and can be used to chat privately. It’s entirely OSS on the client side; the server code isn’t provided (Keybase keeps it private, citing abuse concerns), but with an open client it’s likely trivial to reverse and re-implement your own.
Keybase is End to End Encrypted. It may not be as “feature rich” but all features are private.
I’m not sure if it’s in active development anymore, though. It does allow you to be as public or as private as you’d like about your identity.
Such a system might be constructed for one’s own scraping needs by taking any one of the current frontends/backends and customizing its behavior to mitigate issues, or to ingest/ignore data based on your own inputs as well, such that your model could be “riding along on a human surfboard with human guidance.”
The filtration capabilities available to most users are pretty robust, depending on what you use to interact with the Fediverse. I think it would be possible to filter out problematic bots, users, and even whole domain sources with the right kind of software.
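As a sketch of what that client-side filtering might look like (all names and block lists here are made up; many Fediverse clients and servers expose similar per-user and per-domain blocks):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str   # e.g. "user@instance.example"
    is_bot: bool  # actor-level bot flag, as many Fediverse servers expose
    body: str

# Illustrative block lists; the entries are invented.
BLOCKED_USERS = {"spammer@bad.example"}
BLOCKED_DOMAINS = {"llm-farm.example"}

def visible(post: Post, hide_bots: bool = True) -> bool:
    """Client-side filtering: drop blocked users, whole domains, and bots."""
    _user, _, domain = post.author.partition("@")
    if post.author in BLOCKED_USERS or domain in BLOCKED_DOMAINS:
        return False
    if hide_bots and post.is_bot:
        return False
    return True
```

Domain-level blocking is the blunt but effective tool here: one entry silences an entire problematic instance.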
I’m going to be bold enough to say we don’t have as wide of an AI/LLM issue on the Fediverse as the other platforms will have.
I’m certain that if someone did collect data from the Fediverse, it would become a hot topic; and it might not be enough data anyway, as the Fediverse isn’t normally mainstream enough. So the data and language collected here might skew in a few imaginable ways that one might find undesirable for a general model of word frequencies.
There’s also the fact that people might not appreciate that data being collected. Let’s be real: it’s too soon for such a project to begin. The AI TREND MUST DIE as it currently lives, and its corpse must rot away completely. Now, in internet time that may not be all that long…a few to several years…the memory of the internet can be short-lived at times. It must, however, fade from the public consciousness into some obscurity first.
Once the technology no longer lies in greedy hands again; new development can begin anew.
It occurs to me that adding a visual watermark might actually serve to obscure an otherwise-invisible watermarking scheme, by providing data that scrambles or breaks the watermark decoder itself.
Audio watermarks can be distorted in any number of ways, and the wildly poor audio quality of most cam-rips is probably the only way to defeat the watermark: use a low-quality microphone, encode the audio at a very limited bitrate, and then re-upsample, destroying any subtle alterations a digital watermark might have made to the audio waveform.
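A crude illustration of that degradation idea, assuming the watermark lives in subtle waveform details (this is a sketch of the concept in pure Python, not a tested attack):

```python
def degrade(samples: list[float], factor: int = 4) -> list[float]:
    """Lo-fi round trip: decimate, coarsely quantize, then linearly
    re-interpolate back to the original length. This is the kind of
    lossy mangling that can wash out waveform-level watermark tweaks."""
    low = samples[::factor]                  # throw away most samples
    low = [round(s * 16) / 16 for s in low]  # coarse ~5-bit quantization
    out = []
    for i in range(len(samples)):
        pos = i / factor
        j = int(pos)
        frac = pos - j
        a = low[min(j, len(low) - 1)]
        b = low[min(j + 1, len(low) - 1)]
        out.append(a + (b - a) * frac)       # re-upsample by interpolation
    return out
```

The output has the original length but has lost exactly the fine-grained detail a subtle watermark would hide in.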
Watermarks are only an issue inasmuch as they’re used to trace down which copy was leaked.
With modern digital projection systems you don’t get a reel of film; you get a briefcase of [SS/HD]Ds containing the raw, encrypted footage. The digital projection system decrypts it using provided keys. There’s no output except the standard ones for the theatre projectors and sound systems…so capturing the output is difficult.
If you do intercept the signal, the projection system might detect it and refuse playback, or wipe the decryption keys. Watermarking is also a danger, since your theater can be identified as the leak source and sued.
I’m not accounting for state laws, which may in fact be stricter. I’m talking about federal laws, which might not explicitly forbid such things so long as they’re done in an actually safe manner by professionals.
But, as I said before, if the DEA believes it has the power to stop that nonetheless, that’s what it will do, without regard to whether the law is actually unclear or borderline. Unfortunately, many pharmaceutical outfits don’t care to invite the wrath of the DEA even if what they’re doing could be considered permissible, so long as they do not synthesize an exact drug that the Feds specifically name as a controlled substance.
Again, IANAL either. But I do think there’s a lot of room for small compounding pharmacies to synthesize various drugs to meet a patient’s needs quickly while waiting for proper shipments to arrive. There are lots of life-sustaining compounds that do not fall under the DEA’s banner of authority.
I’ll just leave this here.
Depending on how Vyvanse is scheduled, it might be legal to make privately. If it’s not scheduled like a standard amphetamine, the DEA is powerless.
I have a sneaking suspicion it’s not illegal to compound this stuff. But IANAL; and it doesn’t matter if the DEA thinks it is and will hassle anyone who tries.
I firmly think this would be a boon for many people; owning one of these is likely a lifeline that even small town physicians could utilize to dispense drugs freely or cheaply to patients in need.
This is something that I think small-town pharmacies could use to create compounds in cases of drug shortages. I think the tools, programs, and small labs discussed in the article are a positive force for good, and that they should be not only allowed but encouraged for many drugs that are expensive or unavailable to someone in need, and that can be readily and safely synthesized by someone in a pharmacy with basic college-level chemistry training.
I think the potential risks and downsides are small right now, and that more of this should be encouraged gently, so that we can quickly find out what the flaws and limitations are and put regulatory guardrails around it before people harm themselves.
It feels like this vulnerability isn’t notable for the majority of users, whose threat models don’t typically include “being compromised by a Nation-State-Level Actor.”
That being said, I do hope they get it fixed; and it looks like there are already mitigations in place, like protecting the authentication with another factor such as a PIN. That helps, for the people who do have that rare threat model in play.
The attack also seems clearly difficult to achieve in any reasonable time frame, and would likely require hundreds of man-hours of work to pull off.
If we assume they’re funded enough to park a van of specialty equipment close enough to you; steal your key and clone it; then return it before you notice…nothing you can do can defend against them.
I’m concerned that now that this software has been covered in PopSci, it will suffer a needless onslaught of DMCA takedowns and other lawsuit-related shenanigans. >_>
No; Piracy won’t stop.
Analog loopholes still exist; and cannot be eliminated completely from the chain. Enterprising crackers will tinker and find weaknesses in systems. People will find bypasses, workarounds, and straight up just crack whole encryption schemes that were badly implemented.
Encryption was never intended to protect content; it was intended to protect people. In the short term, sure, DRM and encryption can protect profits. In the long term, they provably cannot and do not. Oftentimes the scheme gets cracked or goes offline, and the costs of keeping authentication servers up long enough to keep lawsuits off your back are provably large and difficult to scale. I would even assert that it costs more to run DRM than it saves anyone in ‘missed profits’.
Frequently, companies also argue that it saves profits by recapturing “lost sales,” but that’s provably false. A consumer deprived of any other viable choice will, in fact, just not buy the thing if they cannot buy it for what they deem a fair price. It has also been shown that if people can acquire the content freely, they oftentimes become far more willing to buy whatever they acquired, or even future titles. When a customer trusts, they may decide to purchase. But why should a customer trust a company that does not trust them?
To be clear; the Nintendo Switch tends to trade fluently in cryptographic certificates.
The MiG Switch has one of these certificates, one its creators likely copied from a legitimate Nintendo Switch game title. All games have such certificates, and they are uniquely serialized, much like a GUID or UUID. These certificates are signed by the game dev studio and then by Nintendo in a typical certificate signing chain: Nintendo signs the Game Dev Studio cert, which signs the Title certificate, which signs the unique cart or digital copy cert.
Banning is usually achieved by revoking either the lowest certificate in the chain or the one directly above it, or even the Dev cert if it was compromised.
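As a toy illustration of that certificate chain and of banning via revocation, with HMAC standing in for real asymmetric signatures and every key name invented:

```python
import hashlib
import hmac

def toy_sign(signer_key: bytes, payload: bytes) -> bytes:
    # HMAC stands in for a real asymmetric signature here (sketch only).
    return hmac.new(signer_key, payload, hashlib.sha256).digest()

def verify(cert: tuple, signer_key: bytes) -> bool:
    name, sig = cert
    return hmac.compare_digest(sig, toy_sign(signer_key, name.encode()))

# Invented keys for the hypothetical chain: root -> dev studio -> title -> cart.
ROOT_KEY, DEV_KEY, TITLE_KEY = b"platform-root", b"dev-studio", b"title-x"

dev_cert   = ("DevStudio", toy_sign(ROOT_KEY, b"DevStudio"))
title_cert = ("TitleX", toy_sign(DEV_KEY, b"TitleX"))
cart_cert  = ("cart-serial-0001", toy_sign(TITLE_KEY, b"cart-serial-0001"))

def chain_ok() -> bool:
    # The chain is trusted only if every link verifies back toward the root.
    return (verify(dev_cert, ROOT_KEY)
            and verify(title_cert, DEV_KEY)
            and verify(cart_cert, TITLE_KEY))

# Revoking (banning) any one certificate breaks trust for everything below it.
REVOKED = {"cart-serial-0001"}

def cart_allowed() -> bool:
    return chain_ok() and cart_cert[0] not in REVOKED
```

Note how a cryptographically valid chain is still useless once any serial in it lands on the revocation list: that’s the lever a platform holder pulls against cloned certs.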
So the MiG Switch carts are likely hardware banned. Your Nintendo Switch probably advertises to Nintendo which cart(s) were recently inserted into it by sharing the fingerprints of their certificates. Nintendo can then basically kill the certificate assigned to your Switch and prevent you from connecting online, as your Switch uses its own system cert to identify itself to Nintendo services.
In all cases this is inescapable when connecting to the internet, as Nintendo Switch system certs are burned into a PROM chip on the main board at manufacture. This is a WORM chip: written once, read many billions of times.
A critical part of the way they try to curb cheating in online play is checking the integrity of the runtime environment, which includes checking what titles were launched recently; if that happens to include a certificate they’ve banned for being cloned by the MiG Switch, you’ll quickly be hit by their anti-cheating hammer.
Most importantly, those checks typically don’t take place on their own; they only occur when you’re connecting to the EShop or connecting to NN to play multiplayer online. The devil therein unfortunately lies in the details: if you’ve ever purchased a Digital Title, your Switch is regularly connecting to the EShop to renew the Digital License Tickets it needs. They tend to expire every 72 hours and must be renewed by presenting the expired Ticket and a valid Ticket Granting Ticket (given to your Switch when you buy the title), contacting “Mommy Nintendo” and asking “Mommy, May I?”. Yeah. DRM sucks.
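A rough sketch of that ticket-renewal dance; all field names and the exact flow are my guesses rather than Nintendo’s actual protocol:

```python
TICKET_LIFETIME = 72 * 3600  # the ~72-hour expiry described above, in seconds

def ticket_valid(ticket: dict, now: float) -> bool:
    return now - ticket["issued_at"] < TICKET_LIFETIME

def renew(expired_ticket: dict, tgt: dict, now: float) -> dict:
    """Present the expired ticket plus the long-lived Ticket Granting
    Ticket; the server refuses banned consoles before issuing a fresh
    ticket (a sketch of the idea, not the real handshake)."""
    if tgt.get("console_banned"):
        raise PermissionError("console banned from online services")
    if expired_ticket["title"] != tgt["title"]:
        raise ValueError("TGT does not match this title")
    return {"title": expired_ticket["title"], "issued_at": now}
```

The important structural point is that every renewal is a fresh chance for the server to say no, which is exactly where ban waves bite.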
If all goes well, your Switch gets a shiny new set of tickets. Unfortunately, Nintendo is paying attention to those requests and will issue regular waves of bans for systems detected cheating. You won’t know when this will happen, and it won’t prevent you from playing your games; you’ll just suddenly find your Switch banned from online play after such a wave.
Like a Hydra: you cut one head off, and two grow in its place.
Yeah this seems like a non-issue to me as well; the source material for the models is probably the cause of this bias.
I also don’t think there are a lot of sources for this manner of speaking. Let’s also not forget that there are oftentimes instructions given to the LLM asking it to avoid certain topics, which it will in fact do.
Honestly, I question the sanity of giving a child an actual clearanced job and expecting him not to brag about it to his friends. Mentally, you’re pretty much a kid until you’re about 25 or so if you’re AMAB.
I’m concerned that higher clearances aren’t checking people for signs of stupid viewpoints before they’re cleared.