• 0 Posts
  • 36 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable because that’s how CloudFlare protection works. CloudFlare creates a workload in the client browser that should be trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDOSing or scraping. You need to execute that browser-based work somewhere to get past those CloudFlare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag; any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container the proxy address is localhost, e.g. “http://localhost:8191”.

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed while the tunnel is up), then this setup would allow requests to those indexers to escape the VPN tunnel.
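    The remote-host setup above can be sketched as a couple of commands. The image name and port are the official FlareSolverr defaults; everything else (container name, restart policy) is just a reasonable choice, not the only way to do it:

    ```shell
    # On the other system: run FlareSolverr, exposing its default port 8191.
    docker run -d \
      --name flaresolverr \
      -p 8191:8191 \
      --restart unless-stopped \
      ghcr.io/flaresolverr/flaresolverr:latest

    # Then, in Prowlarr: Settings -> Indexers -> Add Indexer Proxy ->
    # FlareSolverr, with Host set to http://<other_system_IP>:8191 and a tag
    # that you also apply to the CloudFlare-protected indexers.
    ```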

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server, but drop any other traffic that wasn’t using the VPN tunnel. All that firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch everything to Docker in one go; you can run a hybrid setup if need be.
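    To give a feel for how gluetun replaces those firewall rules, here’s a minimal sketch. The image names are the real published ones, but the provider and key values are placeholders — check the gluetun wiki for your VPN provider’s exact environment variables:

    ```shell
    # Start the gluetun VPN container (provider/key values are placeholders).
    docker run -d --name gluetun \
      --cap-add=NET_ADMIN \
      -e VPN_SERVICE_PROVIDER=mullvad \
      -e WIREGUARD_PRIVATE_KEY="<your_key>" \
      qmcgaw/gluetun

    # Run the download client inside gluetun's network namespace. If the
    # tunnel drops, this container simply loses connectivity instead of
    # leaking traffic outside the VPN -- no firewall rules needed.
    docker run -d --name qbittorrent \
      --network=container:gluetun \
      lscr.io/linuxserver/qbittorrent:latest
    ```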

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going offroad if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. Exact steps are documented in this GitHub comment. I haven’t tested these steps, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the CloudFlare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.





  • The generation that came of age in the peak of the “greed is good” era?

    I can’t speak for all of Gen X, but speaking for myself and everyone I personally know from my generation: we never liked that shit. That was our parents’ bullshit. We just couldn’t do anything about it, politically speaking, when we came of age because we were firmly outnumbered by boomers. We still are actually, except now we’re also outnumbered by millennials. That’s why all the media discussions of this topic are framed as “boomers vs. millennials.” Gen X is rendered politically invisible by its comparatively small size.


    There are some viruses that have targeted Linux, but they’re rare compared to other platforms and their ability to spread is relatively low. One of the main reasons is simply how software tends to be installed on each platform. Viruses have an easier time spreading on Windows or OSX, where users are more accustomed to downloading an executable and running it. Once there’s a malicious running process, it has a comparatively high chance to spread because it can attempt to escalate its privileges, either by exploiting a bug or by socially engineering the user into clicking through a privilege escalation prompt. That entire workflow is practically nonexistent on Linux; users just don’t tend to download and execute random binaries. Instead most Linux software gets delivered in one of these ways, each of which has impediments that reduce the chance a virus could spread:

    • through an OS repo; it would be difficult for a malicious actor to get a virus through the release process and into a trusted repo
    • through a public source like Github; again it would be difficult for a malicious actor to get a virus into public source code without someone noticing
    • through a container image from an image library like DockerHub; I believe a malicious container would be sandboxed, making it hard if not impossible for that container to take over the host system
    • through an application image like a snap, flatpak or appimage; again, I believe these run in their own sandbox from which they would have difficulty breaking out

    There are some exceptions, for example some companies like Hashicorp will distribute their stuff as precompiled binaries. Even in that case you’re probably fine as long as you don’t run the downloaded binary as root. Users in the habit of downloading strange binaries from sketchy places and running them as root just aren’t very common among the Linux userbase. I’m sure there are some (and they should really stop doing that), but there aren’t enough of them to allow a virus to spread unchecked.
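    For that precompiled-binary case, verifying the download against the vendor’s published checksum list at least confirms you got the file the vendor released. Here’s a runnable sketch — a locally generated file stands in for a real download, since for a real Hashicorp release you’d fetch both the archive and its SHA256SUMS file from releases.hashicorp.com:

    ```shell
    # Simulate a downloaded release archive and its published checksum list.
    echo "pretend binary contents" > terraform.zip
    sha256sum terraform.zip > terraform_SHA256SUMS

    # Verify the archive before extracting or running anything from it
    # (and, as noted above, never run the result as root).
    sha256sum --check terraform_SHA256SUMS
    ```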


  • I don’t think dedicated antivirus software is really required anymore. I haven’t run third-party AV software on any of my systems in the last decade.

    On Windows, the built-in Windows Defender is good enough for most use cases. When it first launched Defender had a pretty bad track record at stopping viruses, but now it routinely ranks at the top.

    On Linux, antivirus software has never really been required. One major exception I can think of would be if you’re running a file server or mail server that talks to OSX or Windows systems. Even then the AV software isn’t really there to protect the server, it’s there to make sure you don’t pass malware or viruses to those non-Linux clients.
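    For that file/mail server scenario, ClamAV is the usual open-source choice on Linux. A minimal on-demand scan of a shared directory might look like this (the path is a placeholder for your Samba/NFS export):

    ```shell
    # Update the virus signature database, then recursively scan the share,
    # printing only infected files. This protects the Windows/OSX clients
    # reading from the share, not the Linux server itself.
    sudo freshclam
    clamscan --recursive --infected /srv/share
    ```

    For a mail server you’d more likely run the clamd daemon and hook it into the MTA, but the same idea applies: the scan is for the clients’ benefit.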







  • People here seem partial to Jellyfin

    I recently switched to Jellyfin and I’ve been pretty impressed with it. Previously I was using some DLNA server software (not Plex) with my TV’s built-in DLNA client. That worked well for several years but I started having problems with new media items not appearing on the TV, so I decided to try some alternatives. Jellyfin was the first one I tried, and it’s working so well that I haven’t felt compelled to search any further.

    the internet seems to feel it doesn’t work smoothly with xbox (buggy app/integration).

    Why not try it and see how it works for you? Jellyfin is free and open source, so all it would cost you is a little time.

    I have a TCL tv with (with google smart TV software)

    Can you install apps from Google Play on this TV? If so, there’s a Jellyfin app for Google TVs. I can’t say how well the Google TV Jellyfin app works as I have an LG TV myself, so currently I’m using the Jellyfin LG TV app.

    If you can’t install apps on that TV, does it have a DLNA client built in? Many TVs do, and that’s how I streamed media to my TV for years. On my LG TV the DLNA server shows up as another source when I press the button to bring up the list of inputs. The custom app is definitely a lot more feature-rich, but a DLNA client can be quite functional and Jellyfin can be configured to work as a DLNA server.






  • There used to be three “streams” for Ontario secondary schools:

    • Academic - these were kids that teachers expected would go to universities after high school (if you’re from the US, universities in Canada are analogous to US colleges)
    • General - these were kids that teachers expected would go to colleges or trade schools after high school (if you’re from the US, colleges in Canada are analogous to US trade schools or maybe lower-tier colleges)
    • Basic - these were kids that teachers expected to do menial jobs straight out of high school

    Those descriptions may be harsh, but they were absolutely the unwritten-yet-widely-understood implications of being put in one of these streams. I understand that later on the Basic and General streams were merged into a single “Applied” stream, but that’s the gist. The course focus in each stream was tuned toward those expectations. Academic kids were steered toward math and science, General kids toward shop classes, and Basic kids toward courses like Home Economics.

    The stream system was basically the Pygmalion effect applied to a couple of generations of Ontario students. Kids put in the Academic stream tended to succeed because educators expected them to succeed, and thus were more likely to give those students the attention and resources required for success. Kids put in a lower stream tended not to succeed, again because that’s what educators expected, causing those educators not to apply the same focus or opportunities as they would to kids in the Academic stream. It was absolutely self-fulfilling prophecy type stuff. There was also a hugely problematic racial component to the system. For example, black students were far less likely to be placed in the Academic stream. It is/was a truly awful system and its death is long overdue.

    Full disclosure: I directly benefited from the system, because I was a white kid put in the Academic stream. At the time I thought it was normal, but in hindsight I can see how unfair that system was.


  • You could install LineageOS on your existing phone instead of upgrading. The OnePlus 7 Pro is supported. The install process can be daunting depending on your technical skills, but it’s a one-time process since the phone gets updates over-the-air after the OS is installed.

    I did this with my OnePlus 6 a few months ago and the experience has been good. Switching to LineageOS bumped Android to version 13, whereas it was stuck on Android 11 on stock OnePlus firmware. I’m getting regular updates again, including open-source Android security patches. Not everything gets patched though; some of the core firmware is proprietary to OnePlus and cannot be patched by anyone but them. It’s letting me extend the life of a phone that still works well and has a 3.5mm headphone jack.
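    For a rough idea of what the install involves, the broad strokes look like the commands below. This is a generic outline only — the LineageOS wiki page for your specific device has the authoritative steps, partition names, and image files, and all filenames here are placeholders:

    ```shell
    # Reboot into the bootloader and unlock it (this wipes the device).
    adb reboot bootloader
    fastboot oem unlock

    # Flash the LineageOS recovery image built for your device.
    fastboot flash recovery recovery.img

    # From the recovery menu: factory reset, then choose "Apply update via
    # ADB" and sideload the LineageOS build you downloaded for your device.
    adb sideload lineage.zip
    ```

    After the one-time install, subsequent updates arrive over-the-air as mentioned above.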