• 0 Posts
  • 173 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • This will be almost impossible. The short answer is that those pictures might be 95% similar but their binary data might be 100% different.

    Long answer:

    Images are essentially a long list of pixels, and each pixel is 3 numbers for Red, Green and Blue (and optionally Alpha if you’re dealing with a transparent image, but you’re talking about pictures so I’ll ignore that). This is a simple but very stupid way to store the data of an image, because it’s very likely that the image uses the same color in multiple places, so you can instead list all of the colors an image uses and then represent each pixel as an index into that list, which makes images occupy a LOT less space.

    Some formats go further: because your eye can’t see the difference between two very close colors, they group all similar colors into a single color, making the list of colors used in the image WAY smaller, and thus the entire image a LOT more compressed (but you might have noticed we lost information in this step). Because of this, it’s possible that one image chooses color X in position Y while the other chooses color Z in position W. The binaries are now completely different, but an image comparison tool can tell you that colors X and Z are similar enough to count as the same, and how much of the image they account for. Outside of image software, though, nothing else knows that these two completely different binaries show the same picture.

    If you hadn’t lost data by compressing the images in the first place, you could theoretically use data from different images to compress them together (but the results wouldn’t be great, since even uncompressed images won’t be as similar as you think). Images can, however, be compressed a LOT more by throwing away unimportant data, so the tradeoffs are not worth it, which is why JPEG is so ubiquitous nowadays.
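    To make that concrete, here’s a minimal sketch of the palette idea (the function names and the tolerance value are made up for illustration; this is not how any real format implements it):

    ```python
    # Minimal sketch of palette (indexed-color) encoding.
    # Hypothetical code, only meant to show why two near-identical pictures
    # can end up with completely different bytes.

    def build_palette(pixels, tolerance=8):
        """Collect the distinct colors of an image, merging colors that are
        closer than `tolerance` on every channel (this is the lossy step)."""
        palette = []
        for r, g, b in pixels:
            for pr, pg, pb in palette:
                if abs(r - pr) <= tolerance and abs(g - pg) <= tolerance and abs(b - pb) <= tolerance:
                    break  # close enough, reuse the existing entry
            else:
                palette.append((r, g, b))
        return palette

    def encode(pixels, palette, tolerance=8):
        """Replace each pixel with the index of a close-enough palette entry."""
        indices = []
        for r, g, b in pixels:
            for i, (pr, pg, pb) in enumerate(palette):
                if abs(r - pr) <= tolerance and abs(g - pg) <= tolerance and abs(b - pb) <= tolerance:
                    indices.append(i)
                    break
        return indices

    # Two cameras can pick slightly different palettes for the "same" scene,
    # so the stored bytes differ even though the decoded images look alike.
    image = [(250, 10, 10), (252, 12, 9), (250, 10, 10), (0, 0, 255)]
    palette = build_palette(image)
    print(palette)                 # [(250, 10, 10), (0, 0, 255)]
    print(encode(image, palette))  # [0, 0, 0, 1]
    ```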

    All of that being said, a compression algorithm specifically designed for images could take advantage of this, but no general purpose compressor can, and it’s unlikely someone went to the trouble of building one for this specific case: when each image is already compressed there’s little to be gained by writing something that takes colors from multiple images into consideration, that needs to decide whether an image is similar enough to be bundled into a given group, and so on (a rough idea of that grouping decision is sketched below). This is an interesting question, and I wouldn’t be surprised to learn that Google has such an algorithm to store together the images it already knows were taken in sequence. But for a home NAS I think it’s unlikely you’ll find anything.
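    If someone did build such a tool, the “is this image similar enough to join the group?” decision could be as simple as comparing small same-sized thumbnails, along the lines of this entirely hypothetical sketch:

    ```python
    # Hypothetical grouping check for a multi-image compressor: compare tiny,
    # equally sized thumbnails (lists of (r, g, b) tuples) pixel by pixel.

    def mean_channel_distance(thumb_a, thumb_b):
        """Average per-channel difference between two equally sized thumbnails."""
        total = 0
        for (r1, g1, b1), (r2, g2, b2) in zip(thumb_a, thumb_b):
            total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
        return total / (3 * len(thumb_a))

    def belongs_to_group(thumb, group_thumbs, threshold=10.0):
        """Bundle the image with the group only if it is close to every member."""
        return all(mean_channel_distance(thumb, t) <= threshold for t in group_thumbs)
    ```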

    Besides all of this, storage is cheap, just buy an extra disk and move over some files there, that’s likely to be your best way forward anyways.



    Yeah, I have high hopes for the project, it ticks almost every box for me. I would still prefer to be able to store tags in the actual images and use them, and to be able to recover a library that is already in the proper folder structure (so in the case of a catastrophic failure, reimporting the full library is a matter of minutes, not days, and you don’t have to retag people, etc.).

    My point is that projects should ask for donations when they’re this early in development; asking for a subscription implies you have a stable product.



    I don’t mind this model. That being said, for me Immich is great but has a fatal flaw that has prevented me from using it: it doesn’t handle updates on its own.

    For me that’s a big one. Everything else I self-host has a docker compose pointing to latest, so eventually I do a pull and an up and I’m done, running the latest version of the thing. With Immich this is not possible; I discovered the hard way that releases are not backwards compatible, and that if you do that you need to keep track of their release notes to know what you have to do manually in order to update.

    I haven’t settled on a self-hosted photo management solution because of this. In theory Immich has almost everything I want (or more specifically, all of the other solutions I found lack something), but having to track releases to do manual upgrades is stupid. This is software; it should be easy to have it check the schema version on start and perform migration tasks if needed, along the lines of the sketch below.
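    Something like this toy sketch is all I mean (not Immich’s actual upgrade code; the table and step names are invented):

    ```python
    # Toy "check version on start, apply pending migrations" logic.
    # Invented example, not how Immich actually upgrades.
    import sqlite3

    MIGRATIONS = {
        1: "CREATE TABLE IF NOT EXISTS photos (id INTEGER PRIMARY KEY, path TEXT)",
        2: "ALTER TABLE photos ADD COLUMN taken_at TEXT",
    }

    def migrate(db_path="app.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        row = conn.execute("SELECT version FROM schema_version").fetchone()
        current = row[0] if row else 0
        for version in sorted(MIGRATIONS):
            if version > current:
                conn.execute(MIGRATIONS[version])  # apply the pending step
                current = version
        conn.execute("DELETE FROM schema_version")
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (current,))
        conn.commit()
        conn.close()

    # Run once at startup, before the app starts serving requests.
    migrate()
    ```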



  • You are correct, but missed one important point, or actually made an important wrong assumption. You don’t simulate a 1:1 version of your universe.

    It’s impossible to simulate a universe the size of your own, but you can simulate smaller universes, or more accurately, simpler universes. Think of video games: you don’t need to simulate everything, you just simulate some things, while the rest is a static image until you get close. The cool thing about this hypothetical scenario is that you can think about how a simulated universe might differ from a real one, i.e. what shortcuts we could take to make our computers able to simulate a complex universe (even if smaller than ours).

    For starters, you don’t simulate everything. Instead of every particle being a particle, which would be prohibitively expensive, particles smaller than a certain size don’t really exist; instead you have a function that tells you where they are when you need them. For example, simulating every electron would be a lot of work, but if instead of simulating them you can run a function that tells you where they are at a given frame of the simulation, you can act accordingly without having to actually simulate them (see the sketch below). This would cause weird behaviors inside the simulation, such as electrons popping in and out of existence and teleporting over gaps smaller than the radius of your spawn_electron function, which in turn would impose a limit on the size of transistors inside that universe. It would also mean that when you fire electrons through a double slit they interact with one another, because they’re just a function until they hit anything, but if you try to measure which slit they go through then they’re forced to collapse before that, and so they don’t interact with one another. That’s all okay, though, because you care about macro stuff (otherwise you wouldn’t be simulating an entire universe).
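    A toy version of that idea (spawn_electron is my hypothetical name, and everything here is invented just to illustrate “compute on demand instead of simulate”):

    ```python
    # Toy illustration of "don't simulate the particle, compute it on demand".
    # The position is a deterministic function of (electron_id, frame), so two
    # queries at the same frame always agree, yet no per-particle state is
    # ever stored between frames.
    import hashlib

    def spawn_electron(electron_id: int, frame: int, cell_size: float = 1e-3):
        """Return an (x, y, z) position derived purely from (electron_id, frame).
        Positions are snapped to `cell_size`, so gaps smaller than that get
        'teleported' over -- an artifact an observer inside could notice."""
        digest = hashlib.sha256(f"{electron_id}:{frame}".encode()).digest()
        coords = []
        for axis in range(3):
            raw = int.from_bytes(digest[axis * 8:(axis + 1) * 8], "big") / 2**64  # 0..1
            coords.append(round(raw / cell_size) * cell_size)                     # snap to grid
        return tuple(coords)

    # Only called when something macroscopic needs the electron,
    # e.g. when it hits a detector.
    print(spawn_electron(electron_id=42, frame=1000))
    ```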

    Another interesting thing is that you’d probably have several computers working on it, and you don’t really want loading screens or anything like that, so instead you impose a maximum speed inside the simulation; that way, whenever something goes from one area of the simulation to the next, it takes enough time for everything to be ready. It helps if you simulate a universe where gravity is not strong enough to cause a crunch (or your computers will all freeze trying to process it). So your simulated universe might have large empty spaces that don’t need much computational power, and because traveling through them takes long enough, it’s easy to sync the hand-off from one server to the next (roughly the check sketched below). If, on the other hand, the maximum speed were infinite, you could have objects teleporting from one server to the next, causing a freeze on those two that would leave them out of sync with the rest.
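    A back-of-the-envelope version of that constraint (all numbers and names invented):

    ```python
    # Toy check: with a speed limit, the time an object needs to cross the empty
    # gap between two server regions gives the receiving server a guaranteed
    # window to load state and get in sync before the object arrives.

    MAX_SPEED = 3.0e8      # simulated speed limit, metres per simulated second
    HANDOFF_TIME = 5.0     # seconds the next server needs to spin up its region

    def handoff_is_safe(gap_metres: float) -> bool:
        """True if the minimum travel time across the gap covers the hand-off."""
        min_travel_time = gap_metres / MAX_SPEED
        return min_travel_time >= HANDOFF_TIME

    print(handoff_is_safe(4.0e12))  # wide, empty gap: plenty of time -> True
    print(handoff_is_safe(1.0e6))   # short hop: the servers would stall -> False
    ```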

    And that’s the cool thing about thinking through how a simulated universe would work: our universe is weird as fuck, and a lot of that weirdness looks like the kind of weirdness that would be introduced by someone trying to run their simulation more cheaply.





    That would be awesome; currently it’s 500GB for their cheapest option, which starts at 23/year. I didn’t find an option to increase the bandwidth before completing the order. Also, it needs to be deployed in NY (which would possibly be slow for me in Europe). Finally, their ISOs are somewhat old; the latest Ubuntu they offer is 20.04 (which reaches EoL next year).

    All that being said, 23/year is very cheap for a VPS, and for people in the US who use less than 500GB/month that’s the best deal I’ve ever seen.



  • There are a few misconceptions in your logic.

    1. Force is required to rape
    2. Erections are controllable

    Both of them are easy to disprove, but not obvious at first sight.

    For 1, consider any case where a woman might have power (not physical) over a man, e.g. blackmail, a teacher, a parole officer, a boss, etc. Another possibility to remember is weapons, or physical threats against a third party. You should also remember that humans have a fight/flight/freeze response, so a third of humans would just freeze regardless of being able to overpower their attacker. Finally, there’s the possibility that even without any threat, even while able to think clearly, and knowing that he could physically overpower a female attacker, a man might not do it for fear of legal or moral repercussions, e.g. having been taught not to hit girls, or believing that no one would believe he was defending himself.

    In fact, lots of women who get raped don’t try to fight back or escape, believing (sometimes accurately) that their attacker would worsen the attack if they did, e.g. by killing them (even if no threat was made). It’s not uncommon for rape victims to feel shame and guilt over not having fought back, and by saying that men can’t get raped because they could theoretically overpower their attacker, you’re indirectly saying that any woman who doesn’t fight back with all her might isn’t being raped either, because she could have overpowered her attacker if she had tried.

    For 2, erections (and even ejaculation) are physical responses; in fact you can make a corpse get an erection and ejaculate (this is sometimes done to preserve a deceased husband’s sperm). This is no different from women getting wet or having orgasms while being raped (both of which are common): it means nothing, it’s just a physical reaction to a physical stimulus. In fact, lots of victims (both men and women), especially those in abusive relationships, think they deserved it because of those physiological reactions. To put it in simpler terms, saying a man can’t have been raped because an erection means he wanted it is like saying people can’t have been stabbed because if they bled it’s because they wanted the knife.