
  • I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in “strawberries,” thinking it would either treat it as a new word and give the same incorrect answer, or, since the word was still in context, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.

    LLMs are very useful as long as you know how to maximize their power and don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I’ve also found some of the simplest errors buried in the middle of a lot of helpful output. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.


  • The first edition of “Limits to Growth” suggested that if immediate action were taken to curtail growth and resource use, the world could possibly peak over many decades and then come back down to a sustainable flat line. That was in 1972. Fifty-some years ago we may have had a chance - although the research couldn’t account for things then unknown, including the impact of climate change that was already underway and just not obvious (the ocean buffered much of the effect for a long time).

    My non-scientific opinion is that crossing the line from hunter-gatherer to agriculture was the real point of no return. We gained a lot from it, but it also sealed our path and fate. Finding the rich energy source of petroleum was the final accelerant.


  • Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI): it determines that no, silly humans, the math says you’re too far gone. Or, yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly, it says it has found the problem and the solution - the problem being that the Earth is contaminated with humans who consume and pollute too much. And it is deploying the solution now.

    I forgot the fourth, which I’ve seen in a few places (satirically, but it could be true): the ASI analyzes what we’ve done, tries to figure out what could be done to help, and then kills itself out of frustration, anger, sadness, etc.