Over time, it is. It’s eliminating the source. In Terminator, Matrix, and others they say that the AI took a split second to act, but our AI doesn’t have those connections. It’s working with what it’s got.
Robber beats you up, knocks you down, and takes your wallet.
“Hey, that’s my money!”
“Tell you what, I’ll just take 30% and call it peace.”
“Okay…just don’t do it again, right?”
Robber runs away laughing.
The Reagan-Carter election is the first one I vaguely remember as a kid, and to have the news of the freed hostages get announced at the inauguration seemed so convenient even for a politically uninformed kid. Yet I heard so much of “see, he got elected and got them free!” NO, Carter did the work, dumbasses. At the cost of his reelection.
I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe reexamining the word in a different way?). I then asked how many Rs are in “strawberries,” thinking it would either see a new word and give the same incorrect answer, or, since the first word was still in context, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.
LLMs are very useful as long as you know how to maximize their power, and you don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as a Google, but formatting things far better than if I looked up stuff myself), but I’ve found some of the simplest errors in the middle of a lot of helpful things. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.
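The funny part is that the letter count the model keeps fumbling is trivial for actual code; the usual explanation is that LLMs see tokens rather than individual characters. A minimal Python sketch of the check (the function name is just for illustration):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))    # 3
print(count_letter("strawberries", "r"))  # 3, not 4
```

This is also why the practical advice for LLM-assisted programming is to have the model write the code that does the counting, instead of asking it to count directly.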
The first edition of “The Limits to Growth” suggested that if immediate action were taken to curtail growth and resource use, the world could possibly peak over the coming decades and then come back down to a sustainable flat line. That was in 1972. Over fifty years ago we may have had a chance, although the research didn’t include many things not known at the time, including the impact of climate change that was already underway and just not obvious (the ocean was buffering much of the effect for a long time).
My non-scientific opinion is that crossing the line of hunter-gatherer to agriculture was the real point of no return. We gained a lot from that, but it also sealed our path and fate. Finding the rich energy source of petroleum was the final accelerant.
Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI): it determines that no, silly humans, the math says you’re too far gone. Or, yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly, it says it found the problem and the solution, and the problem is that the Earth is contaminated with humans who consume and pollute too much. And it is deploying the solution now.
I forgot the fourth, which I’ve seen in a few places (satirically, but it could be true): the ASI analyzes what we’ve done, tries to figure out what could be done to help, and then kills itself out of frustration, anger, sadness, etc.
Wasn’t some of that because no one wanted to be the guy to tell him something was wrong? E.g., the Downfall scene.
Right, it’s not about learning anything. They know what they’re doing. They’re making money.
On the flip side, it’s difficult to build enough momentum to change things. Your last line can be used for virtually any of the political systems in the world. Even the “free” ones.
Much harder when any difference of opinion results in a “disappearance.”
It’s much cheaper to tell everyone there’s stealth planes over there…than to send any.
Aka Robin Williams joke - instead of building all these stealth bombers, you put some wreckage in the desert and say one of them crashed.
Wow, an active socialist. Helping people. We can’t have that in Washington. Right, Republicans?
Bringing down a helicopter is just a matter of removing the miracle that keeps it up there. I’ve always been wary of them, and after seeing that one tragic Ring video of a small helicopter that just came apart in midair and plummeted straight down? Never. I mean, they are great strategically, but when they fail…it’s pretty complete.
Pilots spend an insane amount of their non-flight time in simulators doing this very thing, to the point where, when things do go wrong, they subconsciously know the routine to address it. There’s no time to think about a reaction in many cases. And now they want just one person to be at that alert level the whole time.
And I thought that fatigue was an ongoing problem with pilots now, but I’m sure having just one person focused the whole flight won’t hurt.
A compromise - have one pilot, but also require there be at least one passenger who has flown before, or at least messed with Flight Simulator at one time in their life.
They’re looking at the money and forgetting why there are rules.
He picks the outcome that’s the simplest to explain. But you could explain it as a sadistic goal, because look at what the second Thanos wanted to do upon learning the universe didn’t appreciate him the first time: kill it all.
Neither addresses the problem; both just push it into the future. Halving the population (or doubling the resources) doesn’t even buy that much more time. It’s better for drama, though, because disappearing 99% or more of the universe would have really set back the Avengers, if any of them made the cut at all.
Or the frame quality drops, and we’re all Jerry. “My man!”
Who’s going to buy them, Ben? Aquaman?
Have to wonder why he stayed with the party when so much has changed since then.