A currency going up in value tends not to be great for an economy, as people will save instead of spend. It stops functioning as a currency and becomes more of an asset. A slowly depreciating currency tends to foster the most economic growth.
Don’t higher interest rates mean more money is spent paying interest, meaning less money is available to spend on other things, which in turn reduces the money supply in circulation, which curbs inflation?
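A toy back-of-envelope of that mechanism (every number here is hypothetical, just to make the arithmetic visible):

```python
# Toy illustration with made-up numbers: a rate hike eats into the
# money available for other spending.
debt = 300_000            # hypothetical household debt
income = 60_000           # hypothetical annual disposable income

for rate in (0.02, 0.05):
    interest = debt * rate            # annual interest paid
    left_over = income - interest     # what's left to spend elsewhere
    print(f"rate {rate:.0%}: interest {interest:,.0f}, left to spend {left_over:,.0f}")
# rate 2%: interest 6,000, left to spend 54,000
# rate 5%: interest 15,000, left to spend 45,000
```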
Many military experts have commented that Israel has done a very effective job at minimizing civilian casualties. If it were truly indiscriminate, the death toll would be much, much higher.
The IDF has killed more women and children in the past year than any other conflict in the past two decades: https://www.oxfam.org/en/press-releases/more-women-and-children-killed-gaza-israeli-military-any-other-recent-conflict
According to research in The Lancet, the total number of deaths is at ~186,000. That’s everyone, including those who died of preventable causes such as hunger or disease, i.e. both direct and indirect deaths.
According to professor Michael Spagat, the estimated percentage of civilians among the dead is around 80% (https://aoav.org.uk/2024/netanyahu-got-it-wrong-before-the-us-congress-idfs-clean-performance-in-gaza-is-a-lie/). Even conservative estimates put it at at least 61%, the worst ratio for any modern conflict since WW2.
Based on comparisons of the demographics of the dead and the living (how much does the general population differ from the dead? Are men of fighting age over-represented among the dead compared to the general population? etc.), it appears that whilst Israel may target Hamas fighters, it makes next to no effort to avoid civilian casualties in the process. For example, the IDF often targets the homes of suspected Hamas fighters, taking out entire families.
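To make the logic of that demographic check concrete, a minimal sketch (the shares below are entirely hypothetical placeholders, not actual casualty figures):

```python
# Sketch of the demographic comparison described above: if strikes
# discriminated well, fighting-age men should be heavily over-represented
# among the dead relative to the general population.
population_share = 0.20   # hypothetical share of fighting-age men in the population
dead_share = 0.24         # hypothetical share of fighting-age men among the dead

ratio = dead_share / population_share
print(f"over-representation factor: {ratio:.2f}")   # -> 1.20
# A factor close to 1.0 means the dead look demographically like the
# general population, i.e. the strikes barely discriminate at all.
```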
So I don’t know which “military experts” think Israel is doing a good job on this, because the ones I’m seeing seem to agree on the opposite.
Terrorism is more about the intent rather than the result. Did Israel intend to instill terror in the civilian population or did they genuinely try to target Hezbollah militants (and perhaps didn’t care much about any civilian casualties)?
Nintendo has their own emulators for running these games on newer consoles.
Yes, but at least there they still use “Earth time”, just slowed down. For the moon it gets a little bit more complicated I guess.
Time actually passes at a slightly different rate there because of the Moon’s weaker gravitational potential. It’s not just the length of a day.
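A rough back-of-envelope of the gravitational part of that effect (weak-field approximation only; it ignores velocity terms and the Earth’s potential at the Moon’s distance, so treat it as a sketch):

```python
# Gravitational time dilation, Moon surface vs Earth surface,
# in the weak-field approximation (velocity effects ignored).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s

M_earth, R_earth = 5.972e24, 6.371e6   # kg, m
M_moon,  R_moon  = 7.342e22, 1.737e6   # kg, m

phi_earth = -G * M_earth / R_earth     # potential at Earth's surface
phi_moon  = -G * M_moon  / R_moon      # potential at the Moon's surface

rate_diff = (phi_moon - phi_earth) / c**2   # fractional clock-rate difference
print(f"Moon clocks run faster by ~{rate_diff * 86400 * 1e6:.0f} microseconds/day")
# Prints roughly 57 microseconds/day; the full relativistic treatment
# (including orbital motion) lands around 56 microseconds/day.
```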
What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
This is exactly what they’ve proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve the Perfect-vs-Chance problem in a tractable time, which is known to be NP-hard. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you can use the exact same proof presented in the paper again).
They merely mentioned these methods to show that it doesn’t matter which method you pick. The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.
But it’s easy to just define general intelligence as something approximating what humans already do.
No, General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.
Yes, hence we’re not “right around the corner”; it’s a figure of speech that uses spatial distance to metaphorically show we’re very far away from something.
Not just that, they’ve proven it’s not possible using any tractable algorithm. If it were you’d run into a contradiction. Their example uses basically any machine learning algorithm we know, but the proof generalizes.
Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.
That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.
That doesn’t mean they’ve proven there’s no pathway at all.
True, they’ve only calculated it’d take perhaps millions of years. Which might be accurate; I’m not sure what kind of compute global evolution, over trillions of organisms and millions of years, adds up to. And yes, perhaps some breakthrough happens, but it’s still very unlikely and definitely not “right around the corner” as the AI-bros claim (and that near-future thing is what the paper set out to disprove).
Haha it’s good that you do though, because now there’s a helpful comment providing more context :)
I was more hinting at the fact that through conventional computational means we’re just not getting there, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where it might be, but it’s still far-fetched.
But yes, you’re absolutely right that QC in general isn’t a magic bullet here.
The actual paper is an interesting read. They present an actual computational proof, stating that even if you have essentially infinite memory, a computer that’s a billion times faster than what we have now, perfect training data that you can sample without bias, and you’re only aiming for an AGI that performs slightly better than chance, it’s still completely infeasible to do within the next few millennia. Ergo, it’s definitely not “right around the corner”. We’re lightyears off still.
They prove this by proving that if you could train an AI in a tractable amount of time, you would have proven P=NP. And thus, training an AI is NP-hard. Given the minimum data that needs to be learned to be better than chance, this results in a ridiculously long training time well beyond the realm of what’s even remotely feasible. And that’s provided you don’t even have to deal with all the constraints that exist in the real world.
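To get a feel for what that means in wall-clock terms, a toy calculation; the 2**n brute-force cost and the hardware speed are illustrative assumptions of mine, not numbers from the paper:

```python
# Toy illustration of why intractable training times explode:
# assume a brute-force cost of 2**n steps on absurdly fast hardware.
ops_per_second = 1e18          # roughly exascale, about the fastest machines today
seconds_per_year = 3.15e7

for n in (60, 100, 140):       # hypothetical problem sizes
    years = 2**n / ops_per_second / seconds_per_year
    print(f"n={n}: ~{years:.1e} years")
# n=60:  ~3.7e-08 years (fine)
# n=100: ~4.0e+04 years (already hopeless)
# n=140: ~4.4e+16 years (millions of times the age of the universe)
```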
We perhaps need some breakthrough in quantum computing in order to get closer. That is not to say that AI won’t improve or anything, it’ll get a bit better. But there is a computationally proven ceiling here, and breaking through that is exceptionally hard.
It also raises (imo) the question of whether or not we can truly consider humans to have general intelligence or not. Perhaps we’re not as smart as we think we are either.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made, that an AGI that matches the currently agreed upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They provided an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. they have a computer with limitless memory, they have infinite and perfect training data, they can sample without any bias, current techniques can eventually create AGI, an AGI would only have to be slightly better than random chance but not perfect, etc…), and then present a computational proof that shows that this is in contradiction with other logical proofs.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction, as we have proof, hard mathematical proof, that no such polynomial-time algorithm can exist (unless P=NP); the problem is NP-hard. Therefore, AI-learning for an AGI must also be NP-hard. And because every known AI learning method is tractable, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinite decimals or something.
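The skeleton of that argument, as I read it (a simplified sketch of the reduction, not the paper’s full construction):

```latex
% Simplified sketch of the contradiction argument.
\text{Perfect-vs-Chance} \;\le_p\; \text{AI-by-Learning}
  \quad\Longrightarrow\quad \text{AI-by-Learning is NP-hard.}
\\
\text{AI-by-Learning} \in \mathrm{P}
  \;\Longrightarrow\; \text{Perfect-vs-Chance} \in \mathrm{P}
  \;\Longrightarrow\; \mathrm{P} = \mathrm{NP}.
\\
\text{So, assuming } \mathrm{P} \neq \mathrm{NP},
  \text{ no tractable learning method solves AI-by-Learning.}
```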
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof for this.
China/Russia/the Middle East not allowing it is not the same as it not being available. Did you even check the coverage map before replying?
So can you use it or is it not available then? And yes, I checked that map, where else do you think I got the list from??
Astronomers complain about light bleed from ground cities as well. No one was telling them to shut down the cities.
People claim we should turn down city lights all the time! Under what rock have you been living? But for city light bleed, astronomers have an alternative solution: simply place the telescope somewhere far from cities. And yes, whenever a city grows near one of those telescopes, astronomers do kick up a fuss about it.
If you fill LEO with thousands of satellites, there’s nothing astronomers can do about that.
Lol no, just no… I don’t know where you live, but the majority of people in rural areas are not served; otherwise Starlink would never have taken off and been sustainable.
I don’t know where you live, Mars perhaps?
https://nl.m.wikipedia.org/wiki/Bestand:InternetPenetrationWorldMap.svg
Clearly shows most of the Earth has internet access. Or do you think the US has no rural areas? They’re still above 90% somehow. Oh wait, I know, they must be using those mythical internet-via-satellite services that existed well before Starlink did! I wonder where you’d find a mythical creature like the Viasat-1, for example.
Starlink took off because they promise higher speeds at lower cost than some ISPs and most other satellite companies, not because they’re your only option. Starlink has 3 million customers, which makes them the size of a small ISP.
Again, this myth you keep spouting that the majority of the world has access is bullshit.
Except for the fact that the data backs me up.
“Planes exist, but you need to walk because you live too far from the airport” is some classist bullshit.
Continuing your analogy, you propose demolishing the local university because people are entitled to fly to Ibiza, or their local supermarket. Or something, it’s not like it made much sense anyway.
You still completely failed to address the main point: that universal high-speed internet access is not critical for most of the world, certainly not for areas that have always managed perfectly fine without it, and that filling up LEO is a disaster for astronomers that they don’t have a workaround for. If you’re not going to actually argue that point, I think we’re done here.
Starlink doesn’t cover the globe; it’s available in the Americas, Europe and Oceania. It’s not available in most of Africa, the Middle East, India, China, Russia, or Indochina. I.e. the majority of the world cannot access Starlink.
I don’t give a shit that Starlink is owned by Musk. Starlink as a company seems fine (it’s not X or anything), but I strongly dislike that their product messes with astronomy in such a major way that astronomers complain about it every chance they get.
You know that some of us are 10 miles from town and considered rural? And the big telecoms refuse to run broadband for us?
Sounds like your fight is with “big telecom” and with your local government for not putting up a good enough quote to run fiber. This isn’t an issue for large portions of the world, including rural areas, where they’ve figured out how to get them to lay fiber.
Internet access has for a long time been pushed as a priority; it should be treated as a utility, and everyone should have access to it.
Access is not the same as high-speed access. Almost all of the world has some level of access, even in rural areas, through satellites that are not in LEO. Enough to (slowly) browse, not enough to stream in HD. I don’t believe sacrificing considerable astronomical discoveries and progress is remotely worth it when feasible alternatives are available and have been used in large areas of the world already.
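For context on why those higher-orbit satellites feel slower for interactive use, a quick propagation-delay estimate (speed of light only; routing and processing overhead ignored):

```python
# Light-travel time alone for satellite internet: geostationary orbit
# (where pre-Starlink services like Viasat-1 sit) vs Starlink's ~550 km LEO.
c_km_s = 299_792                # speed of light, km/s

for name, altitude_km in (("GEO", 35_786), ("LEO", 550)):
    # user -> satellite -> ground station, and the reply back again
    rtt_ms = 4 * altitude_km / c_km_s * 1000
    print(f"{name} ({altitude_km} km): minimum round trip ~{rtt_ms:.0f} ms")
# GEO: ~477 ms before any network overhead; LEO: ~7 ms.
```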
Morality is a product of civilisation and community. It’s the ability of groups to decide on a single set of rules by which they would like to be treated, as breach of those rules can cause physical or emotional harm. And then there’s simple evolution, where certain “moral rules” allowed civilisations to survive and thrive better than others.
At no point is “god” required here.
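That evolutionary point can even be demonstrated computationally. A minimal sketch using the textbook iterated prisoner’s dilemma (standard payoffs; nothing here is empirical data): a “moral rule” like reciprocity outscores pure defection once interactions repeat.

```python
# Minimal iterated prisoner's dilemma: reciprocity (tit-for-tat) vs
# pure defection, with the standard textbook payoff matrix.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):      # cooperate first, then mirror
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []             # each side sees the other's past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (99, 104): defection barely gains
```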