The biggest challenge to reaching an agreement on the European Union’s proposed AI Act has come from France, Germany and Italy, which favour letting makers of generative AI models self-regulate instead of facing hard rules.
Well, we saw what happened (allegedly) with OpenAI “self-regulating” itself.
Can anyone name any sector that, when left to self-regulate, has actually behaved in a responsible and constructive manner? Any company that did the right thing in the absence of regulation telling them to?
[This comment has been deleted by an automated system]
Maybe it’s different in the US, but in my country, age ratings on media aren’t the sector regulating itself. Film and TV are rated by an NGO (who issue ratings based on a list of criteria determined by government legislation), and games are rated by an organisation that is accountable to the government. So I’d consider both to be externally regulated, not self-regulated.
deleted by creator
I suspect that the games industry has managed to self-regulate its own ratings in large part because TV and film ratings are so often codified in law. It provides a baseline for what is and isn’t acceptable for certain audiences, and makes it obvious that regulation will happen one way or another. The existing TV and films ratings systems also create an environment where the consumer expects something similar for games. Regulation of visual entertainment has basically been normalised for the entire lifetime of most people alive today.
I think it also explains why the games industry has been bad at self-regulating on gambling stuff like lootboxes/etc. When a game has graphic violence or sex, it’s easy to draw a comparison with film and TV, and pre-emptively self-regulate. The gaming industry can manage that because everybody involved is familiar with film and TV - and may even have worked in that industry before, since there are many skill overlaps. But the organisations and institutes doing the ratings would seem less familiar with the gambling industry, and therefore haven’t given enough thought to how they ought to self-regulate on that. There’s a sufficient lack of self-regulation on lootboxes/etc that external regulation appears to be necessary.
And I think this ultimately highlights why AI will need external regulation. The only sector that has successfully self-regulated is one that already had a base of comparison with a separate-but-similar sector that has an existing history of regulation. AI doesn’t have anything comparable to use as a baseline. While game devs could look at the history of film and TV regulation and make a good guess as to what would happen if they didn’t regulate themselves, the AI devs think they’re untouchable.
Comic books also self-regulated for a long time.
I don’t think we can let the current big AI players regulate themselves, but the ESRB hasn’t been too bad at doing its job.
…it is now commonplace to find elements that are considered psychologically equivalent to gambling with real money in games rated E for Everyone, and therefore recommended for children of all ages.
The ESRB may be plenty harsh on violence and sexual content, but it completely neglects its job wherever accurately rating manipulative monetisation elements might earn the industry less money.
It was spawned to stop the government from regulating video game content after the controversies over games like Mortal Kombat and Night Trap.
Yes, I know. My point is that as new needs for self-regulation have come up, they are playing coy, because as industry representatives it’s more profitable to pretend they don’t realize there is a new risk that justifies ratings and warnings for children and their parents. If they won’t catch up until the threat of government regulation looms, they are not doing their job properly.
Ironically, they are harsher on fictional depictions of gambling than on lootboxes that cost real money, so they always knew there were risks of this kind.
I get you.
We let Google self regulate. We let Facebook self regulate. We let Microsoft self regulate. How did that turn out again??
When profits are of concern, “self-regulation” gets defenestrated!
At this point, I think that regulations are useless. Not because these companies aren’t harmful. But because they will either convince the government that they’ll self-regulate, or they’ll use their insane profits to bribe the politicians into castrating the regulatory agencies. I’m convinced that the only way to prevent these greedy scum from harming humanity is to never let them grow that big in the first place. When these companies are big enough to control the government, they should be cut down to size with a healthy margin of safety.