an AI market that’s projected to reach $1.3 trillion by 2032.
I wish there were a way to short an entire industry; I could get rich betting against techbro hype.
Depends on what sort of god you are. Most are harmless, but the malicious ones get Nietzsche’d.
Why does morality have to be objective to keep you from raping animals?
Then where do they come from, if there’s no objective morality?
They come from people, of course. Here’s a history lesson: https://en.wikipedia.org/wiki/Jurisprudence#History
Not true, abortion is becoming rampant because political factions are trying to change a moral fact.
False, abortion rates in the US have been in decline since the ’80s:
That’s not a simple figure to calculate, because the raises are percentage based and the people who get them don’t all make the same wage.
According to https://en.as.com/latest_news/boeing-strike-how-much-do-machinists-make-per-year-average-salary-for-company-workers-n/, the current average salary for a machinist union member at Boeing is $75,600. A 35% raise would bring that average base pay to just over $100k.
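That arithmetic can be checked in a couple of lines. This is a minimal sketch assuming only the two figures stated above (the $75,600 average from the AS.com article and a flat 35% raise applied to base pay):

```python
# Check of the raise arithmetic above, assuming the reported $75,600
# average base salary and a flat 35% percentage raise.
average_salary = 75_600
raise_pct = 0.35

new_salary = average_salary * (1 + raise_pct)
print(f"${new_salary:,.0f}")  # $102,060 -- "just over $100k"
```

The same percentage applied to different individual wages produces different dollar amounts, which is why the total cost of the raise isn’t a simple figure.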
You forgot to put caveats on the things you claim LLMs can do; only one of them doesn’t need any.
Why would you think that LLMs can do sentiment analysis when they have no concept of context or euphemism and are wholly incapable of distinguishing sarcasm from genuine sentiment?
Why would you think that their translations are of any use given the above?
https://www.inc.com/kit-eaton/mother-of-teen-who-died-by-suicide-sues-ai-startup/90994040
Then what’s the LLM for?
LLMs aren’t “bad” (ignoring, of course, the massive content theft necessary to train them), but they are being wildly misused.
“Analysis” is precisely one of those misuses. Grand Theft Autocomplete can’t even count; ask it how many e’s are in “elephant” and you’ll get an answer anywhere from 1 to 3.
This is because they do not read or understand, they produce strings of tokens based on a statistical likelihood of what comes next. If prompted for an analysis they’ll output something that looks like an analysis, but to determine whether it is accurate or not a human has to do the work.
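The contrast is easy to make concrete. Counting characters is a trivial, deterministic operation for ordinary code, which is exactly what a token predictor isn’t doing; a minimal sketch:

```python
# Deterministic character counting: the program inspects the actual
# characters, rather than predicting a statistically likely next token.
def count_letter(text: str, letter: str) -> int:
    return text.lower().count(letter.lower())

print(count_letter("elephant", "e"))  # 2, every single time
```

The one-liner gives the same correct answer on every run; the LLM’s answer varies because it never inspects the characters at all.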
I think you are confused about the delineation between local and federal governments.
I am not, I simply don’t believe the delineation is relevant since taxpayers fund both the state and federal budgets.
Also, this feels like you are too capitalism-pilled
This is me being “reasonable” and working within the constraints of the system. If we aren’t going to have free universal college et al then we can at least trade some of the bloated military budget for a public works program.
People would seriously read through them for 1 day, and then they’d be like, “clear”, “clear”, “clear” without looking at half of them.
Sounds to me like a 50% improvement over zero human eyes.
It’s not like you’re gonna find and fund another group to review the first group’s work, after all.
Why not? We could hire three teams to do it simultaneously in every state in the country and the cost would still be a tiny fraction of how much was wasted on the F-35 program.
LLMs neither understand nor analyze text. They are statistical models of the text they were trained on. A map of language.
And, like any map, they should not be confused for the territory they represent.
If you admit that they have issues with facts, why would you assume that the randomly generated facts their “analysis” produces must be accurate?
So, what? They’re going to pay a human to OK the output and the whole lot of them never even gets seen?
Say 12 minutes per covenant, that’s 1 million work hours that humans could get paid for. Pay them $50 an hour and it’s $50 million. That’s nothing, less than 36 hours worth of the $12.5 Billion in weapons shipments we’ve sent to Israel in the last year. We could pay for projects like this with the rounding errors on the budget for blowing up foreign kids, and the people we pay to do it could afford to put their kids through college.
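The figures above can be sanity-checked with a back-of-envelope sketch. The ~5 million covenant count is an assumption inferred from the stated totals (1 million hours at 12 minutes each), not a number given in the thread:

```python
# Back-of-envelope check of the payroll-vs-weapons comparison above.
# Assumption: ~5 million covenants, inferred from the stated 1M hours.
covenants = 5_000_000
hours = covenants * (12 / 60)           # 12 minutes each -> 1,000,000 hours
payroll = hours * 50                    # $50/hour -> $50,000,000

weapons_per_hour = 12.5e9 / (365 * 24)  # $12.5B spread evenly over a year
print(payroll / weapons_per_hour)       # roughly 35 hours of weapons spending
```

$50 million works out to about 35 hours of spending at that rate, consistent with the “less than 36 hours” claim.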
Instead, we get a project to train a robotic bigotry filter for real estate legalese and 50 more cruise missiles from the savings.
The use of LLMs instead of someone who can actually understand context.
Given the error rate of LLMs, it seems more like they wasted $258 and a week that could have been spent on a human review.
Worse, it’s in the channel’s name.
Immediate downvote for crypto BS
Site hosting should be (almost) free, because it costs (almost) nothing.
I used to work for a popular web host and 99% of the business could have been rendered obsolete by p2p hosting infrastructure.
“Joe will make sure it’s stopped.”
Like when he declared COVID was over a few months ago?
Did they ever fire that cop?
The 3-letter surveillance agent from the UK:
https://www.makeuseof.com/raspberry-pi-hires-former-police-officer-for-surveillance-tech/
Reality isn’t even objective; relativity is the rule.
Also, we can’t even impose a religion’s brand of morality on its own priests, why would you pretend that doing so globally would even be possible?