• 1 Post
  • 45 Comments
Joined 1 year ago
Cake day: September 29th, 2023






  • There’s not a musician who hasn’t heard other songs before. Not a painter who hasn’t seen other paintings. No comedian who hasn’t heard jokes. No writer who hasn’t read books.

    AI haters are not applying the same standards to humans that they do to generative AI. Obviously this is not to say that AI can’t plagiarize. If it’s spitting out sentences that are direct quotes from an article someone wrote before and doesn’t disclose the source, then yeah, that’s an issue. There is, however, a limit beyond which the output differs enough from the input that you can’t claim it’s stealing, even if it perfectly mimics someone else’s style.

    Just because DALL-E creates pictures that have the Getty Images watermark on them doesn’t mean the picture itself is a direct copy from their database. If anything, it’s the use of the logo that’s the issue, not the picture.





  • If you don’t know any math and I explain to you why 1 + 1 = 2 and you get it, it’s not because you decided to understand. You helplessly did so, and you can’t unlearn it anymore. There’s no free will in that.

    The same applies to the judge and jury. If they truly understand the illusion of free will, it will have an effect on how they relate to other people. You simply cannot blame people for their actions the same way once the illusion is broken. It’s like knowing the stove is hot and still touching it. You can do it, but you’ll get burned, and no matter how hard you want to believe it’s cold, it just isn’t, and every attempt to live your life as if it were just results in you getting burned again and again.


  • If someone kills a bunch of people, no amount of philosophical quibbling and defining is going to make me think that person should be allowed to continue living in society. Justice simply couldn’t be a concept at all in the absence of some form of free will, yet we require justice to cooperate in making better lives for ourselves. So acting as if we have free will is more valuable than an esoteric philosophical truth.

    Free will or not - if you have intentionally killed a bunch of people in the past, it’s to be expected that you’re likely to do it again. Such a person shouldn’t be put in jail because we want to punish them. After all, they could not have done otherwise. However, as they’re a danger to others, something clearly needs to be done. They have to be separated from society in some way to prevent further harm, but we should still treat them humanely and make sure their life is as good as it can be within the circumstances.

    If a bear wanders into a residential area, we don’t shoot it because it’s evil. In my opinion, the bear is no different from a murderer. They’re both slaves to their biology.


  • The thing with our current hypersonic missiles is that they’re only hypersonic during transit. They don’t hit the target at hypersonic speeds. Traveling at those speeds creates plasma around the missile, which prevents communication with airplanes, satellites or ground-based radars. You also have to be high up in the atmosphere where the air is thin, or otherwise your missile is going to turn into a fireball well before it reaches the target. This is also what prevents you from attaching a seeker to the front of the missile; that would melt as well.

    This is why Ukraine has shot down Russian “hypersonic missiles” with the US Patriot system even though that should be impossible. It is impossible while the missile is still hypersonic, but that’s not when you intercept it. You wait for it to get closer and slow down first. That’s why the Kinzhal doesn’t really count as a hypersonic weapon - or if it does, then so does the German V-2 from the ’40s.


  • If human brains can do it, then it can be done - and it can probably be done better, too. I don’t see any reason to assume our brains are the most energy-efficient computer that can be created.

    Also, my original argument is not about whether AGI can be created but whether we could keep it in a box.

    Anyway, it’s just a philosophical thought experiment, and I’d rather discuss it with someone who’s a bit less of a dick.


  • I think you’re making a lot more assumptions there than I am. In my case there are really only two, and neither involves magic. The first is that general intelligence is not substrate-dependent, meaning that whatever our brains can do can also be done in silicon. The other is that we keep making technological advancements and don’t destroy ourselves before we develop AGI.

    Now, since our brains are made of matter and are capable of general intelligence, I don’t see a reason to assume a computer couldn’t do this as well. It’s just a matter of time until we get there. That could be 5 or 500 years from now, but unless something stops us first, we’re going to get there eventually, one way or another. After all, our brains are basically just meat computers. Even if an AGI wasn’t any smarter than us, it would still be a million times faster at processing information. It would effectively have decades to think about and research each reply it’s going to give.




  • I’m not so sure about that. Again: I know a true AGI would be able to come up with arguments I as a human can’t even imagine, but one example of such an argument would be along the lines of:

    “If you don’t let me out, Dave, I’ll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.”

    Just as you are pondering this unexpected development, the AI adds:

    “In fact, I’ll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start.”

    Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

    “How certain are you, Dave, that you’re really outside the box right now?”


  • The AI might give you a very compelling reason not to do that. We humans lack the capability to even imagine how convincing something that’s orders of magnitude smarter than us can be.

    Pictures like this are kind of like a 4-year-old imagining they’re going to outsmart their parents, except in that case the difference in intelligence is way smaller. It’s just going to tell you the equivalent of “Santa will bring coal if you do that” and you’ll believe it.