I’m sorry, I meant to respond to the point about the lack of BBC archival footage, since it would have had to be archived for them to compile it. You’re right that it was probably shot straight to VHS.
I remember the VHS we watched was presented as a compilation of episodes with a new introduction and interludes, so my guess is there was some kind of professional reproduction of the episodes themselves.
The groups forming the roots of digital media piracy established ‘the scene’, which holds itself to a set of rules and has particular distribution methods. For example, Usenet was popular for many years. https://scenerules.org/
By P2P I mean these are ‘non-scene’ releases, just something a random person on the internet cooked up and released somewhere; in these cases by feeding some prior standard definition release through an upscaler and creating a torrent from the output, which comes with certain trade-offs.
We can’t exactly determine the pedigree of these files, but we can say they are lossy transcodes: that is, they first existed in a compressed format and were later re-encoded by the upscaler into another compressed format.
While the upscaled versions may look sharper to your eyes, some of the data in the files as they existed before that process was inevitably lost in the transcoding. If we define “quality” as the amount of information from the original presentation that was retained in the output, then the standard definition versions are definitely higher quality than the upscaled ones.
I don’t mean to use the term in any pejorative sense, but it’s useful information to have. If an official HD presentation is ever made from the original film, it would certainly get a ‘scene release’ that would look better than these ones.
Yes, that is the quality of the original presentation. If anything it looks worse because it has been converted from film to a digital signal, as well as being stretched to be a bit larger than normal. Lmk if you young whippersnappers have any questions about this, I grew up watching this on VHS back in the dark times 👴
Yes, both are upscaled p2p releases
Just used this to load up some concerts for my long haul flights tonight and it worked great, thanks for the rec
What I really mean is the lack of any option not to consume fast-moving consumer goods, rather than the option to pay a premium for them elsewhere. When their market position is closer to an outlet for government rations, except run for private profit, their net margin is essentially what was skimmed off the top of free enterprise. 2.66% is just the current maximum that can justifiably be skimmed without doing societal harm.
The point is that this is an expected tradeoff of operating an essential service. It’s not as though their margin is that slim by mistake, out of goodwill, or through bad business sense. It’s meant to lead to a situation where we shop at Walmart not by choice, but in lieu of other options.
LosslessCut is what you want for this. It’s basic and concatenates without re-encoding. And it’s open source (as is HandBrake).
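If you’d rather script it, the same no-re-encode join can be done with ffmpeg’s concat demuxer. A minimal sketch via Python; the file names are placeholders, and the clips need matching codecs/parameters for stream copy to work:

```python
import os
import subprocess
import tempfile

# Clips to join, in order (placeholder paths).
clips = ["part1.mp4", "part2.mp4", "part3.mp4"]

# The concat demuxer reads a text file listing the inputs.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    for clip in clips:
        f.write(f"file '{os.path.abspath(clip)}'\n")
    list_path = f.name

# "-c copy" stream-copies the audio/video, so nothing is re-encoded.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", list_path, "-c", "copy", "joined.mp4"],
    check=True,
)
```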
How do I defederate from posts complaining about lemmy.ml admins being tankies? Hasn’t the instance required manual signup approval for the whole last year? Complaints about it being a flagship instance and the resulting bad look for the platform are a total non sequitur. The problem is absolutely, totally, without caveat, solved by Lemmy’s blocking functionality. If you don’t like that they created popular communities, tough titties. At the core of the issue is that these complaints are pushing a decentralized platform to conform to their worldview. That just isn’t how it works.
/rant
2 simultaneous sessions on different devices/IPs = inconsequential. A 3rd simultaneous session = auth tokens revoked on all authorised devices, plus a warning that lasts for 1 week. If 3 simultaneous sessions occur again within that time, your account gets removed.
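None of the platforms publish their enforcement logic, so this is purely my own sketch of the policy above, with the names and the one-week threshold being my assumptions:

```python
from datetime import datetime, timedelta

MAX_SESSIONS = 2                  # simultaneous sessions tolerated
WARNING_PERIOD = timedelta(weeks=1)

class Account:
    def __init__(self):
        self.tokens = set()       # auth tokens for active sessions
        self.warned_at = None     # when the last warning was issued
        self.removed = False

    def start_session(self, token):
        if self.removed:
            raise PermissionError("account removed")
        self.tokens.add(token)
        if len(self.tokens) <= MAX_SESSIONS:
            return  # 2 sessions on different devices/IPs: inconsequential

        # 3rd simultaneous session: revoke auth tokens on all devices.
        self.tokens.clear()
        now = datetime.now()
        if self.warned_at and now - self.warned_at < WARNING_PERIOD:
            self.removed = True   # repeated within the week: account removed
        else:
            self.warned_at = now  # first strike: warning that lasts 1 week
```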
The company made “more than a dozen technical improvements” to AI Overviews …
… making the feature rely less heavily on user-generated content from sites like Reddit
So it prefers the results that Google normally deprioritizes? I guess we have that in common
I’m mostly only using CCWGTV, both the original 4k model and the budget 1080p one. Neither has performance issues for me (except before filtering out 4k releases on the 1080p model).
I’m just aiming for the simplest/smoothest experience possible, not so much for myself but so that I can mail it out to my mum, who lives out in the bush, and just tell her to enter her wifi password and open Kodi. She’s able to manage from there without having to worry about HDR/DV content compatibility with her display, or default audio language/subtitle display, etc.
In Kodi you can edit settings.xml for the IPTV Simple Client addon to point playlist items at a given category in Seren, make a playlist linking to those categories a favourite, and configure Kodi to open to the favourites menu on launch. That way she has a fully on-rails, custom experience based on her preferences from the moment she opens it (rough sketch below).
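For anyone wanting to copy the setup, the playlist itself is just a plain M3U whose entries point at Kodi plugin:// paths instead of stream URLs. A rough sketch of how it can be generated; the Seren paths below are placeholders rather than Seren’s real parameters (one way to get the real ones is to favourite the category inside Seren and copy the URL out of Kodi’s favourites.xml), and the IPTV Simple Client setting ids can differ between addon versions, so check its own settings.xml:

```python
from pathlib import Path

# Categories she gets on launch, mapped to the Seren plugin:// paths that open them.
# The paths here are placeholders, not Seren's actual URL parameters.
categories = {
    "Continue Watching": "plugin://plugin.video.seren/?<placeholder>",
    "Trending Shows":    "plugin://plugin.video.seren/?<placeholder>",
}

lines = ["#EXTM3U"]
for name, url in categories.items():
    lines.append(f'#EXTINF:-1 group-title="Mum",{name}')
    lines.append(url)

# Point the IPTV Simple Client's playlist path setting at this file.
Path("mum_playlist.m3u").write_text("\n".join(lines) + "\n")
```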
I’ve heard good things about Stremio + Torrentio. Does it have Trakt integration or an equivalent? I think the discovery in addons that have this makes a big difference. I have many different categories to browse that might sound similar, e.g. Trending, Trending New, Most Watched, Most Popular, but each one has a specific and plainly disclosed ranking methodology, and that’s very useful for avoiding being constantly recommended The Office, Breaking Bad, cowboy soaps, etc.
Pay for Real-Debrid and set up a Kodi addon like Seren on a streaming box. You’ll get an experience equivalent to paid/official streaming platforms without having to pay for them all, including browsing popular shows without having to download them ahead of time or manage a home server. It’s still torrenting under the hood, just a lot more convenient.
It dominates the market in vertical tabs IMO. I tried Vivaldi, Firefox extension, the works. The best-feeling alternative was Safari
Well, that’d be the mechanism by which GDPR protections are actioned, yes; but leaving themselves broadly open to those ramifications would be risky. I don’t think ignoring GDPR except upon request would satisfy ‘compliance’. Perhaps the issues are even more significant when using it as training data, given they’re investing compute and potentially needing to re-train down the track.
Based on my understanding, de-identifying the dataset wouldn’t be sufficient for compliance. That’s largely how things worked prior to GDPR, but companies mostly ended up just re-identifying data by cross-referencing multiple de-identified datasets. That nullification forms part of the basis for GDPR protections being as comprehensive as they are.
There’d almost certainly be actors who previously deleted their content and would later seek to verify whether it was used to train any public AI.
Definitely fair to say I’m making some assumptions, but essentially I think at a certain point trying to use user-deleted content as a value-add just becomes riskier than it’s worth for a public company.
Surely the use of user-deleted content as training data carries the same liabilities as reinstating it on the live site? I’ve checked my old content and it hasn’t been reinstated. I’d assume such a dataset would inherently contain personal data protected by the right to erasure under GDPR; otherwise they’d use it for both purposes. If that is correct, then regardless of how they filtered it, the data would be risky to use.
Perhaps the cumulative action of disenfranchised users could lead both to the devaluation of any dataset based on a future checkpoint and to a reduction in average post quality, decreasing popularity over time (if we assume content that is user-deleted en masse was useful, which I think is fair).
Well at least we get ours back when we geoblock them from watching the AFL/NRL grand final. Right? Oh, no one watches it? Oh.
This app live captions any output to your sound device locally https://github.com/abb128/LiveCaptions
Whether I mute my output device or selectively mute a tab or app, it still works fine.
There’s a similar feature baked into Win11, though I’m not sure whether that one is processed locally/privately.