In case you didn’t know, training an AI heavily on content generated by another AI causes distortion that degrades the quality of the output, and it is very difficult to filter AI-generated text out of a scraped dataset of human text. This phenomenon is known as model collapse.

So if you were to start using AI to generate comments and posts on Reddit, their database would be less useful for training AI and therefore the company wouldn’t be able to sell it for that purpose.
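To make the collapse mechanism concrete, here is a toy sketch (my own illustration, not anyone's actual experiment): the "model" is just a Gaussian fit to its data, and each generation is trained only on the previous generation's samples. Because generative models favor typical outputs over rare ones, the sketch drops tail samples beyond 2 sigma at each step, which is a stand-in assumption for that effect; the spread of the data shrinks generation after generation.

```python
# Toy illustration of model collapse: a "model" (here, just a Gaussian
# fit via sample mean and std) repeatedly retrained on its own output.
# Truncating at 2 sigma mimics a generative model losing the tails.
import random
import statistics

random.seed(0)

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(10000)]

for gen in range(8):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {gen}: std = {sigma:.3f}")
    # The next generation trains purely on synthetic samples, and the
    # rare tail values beyond 2 sigma are never reproduced.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(10000))
            if abs(x - mu) <= 2.0 * sigma]
```

Each pass loses a little more of the distribution's tails, so the printed standard deviation shrinks steadily: the model's picture of the data narrows until it no longer resembles the original.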

  • Windex007@lemmy.world · 10 months ago

    Biased models are still absolutely a massive concern to serious researchers.

    “AI collapse” isn’t the only mechanism to throw a monkey wrench into someone’s AI ambitions.

    Intentionally introducing and reinforcing biases in an automated fashion adds an additional burden to those developing a model. I haven’t actually looked into the economic asymmetry of those attacks, though.
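A minimal sketch of why poisoning can be cheap for the attacker (a hypothetical illustration of the asymmetry, with made-up numbers): if the "model" is just a sample mean, an attacker controlling only 5% of the corpus can pull the learned statistic well away from the truth, while the defender has to audit the whole dataset to undo it.

```python
# Hypothetical data-poisoning sketch: a small fraction of injected
# samples shifts a fitted statistic (here, the sample mean).
import random
import statistics

random.seed(1)

# 9,500 clean samples centred on 0.
clean = [random.gauss(0.0, 1.0) for _ in range(9500)]
# Attacker injects 500 biased samples (5% of the final corpus).
poison = [10.0] * 500

mean_clean = statistics.fmean(clean)
mean_poisoned = statistics.fmean(clean + poison)
print(f"clean mean ~ {mean_clean:.2f}, poisoned mean ~ {mean_poisoned:.2f}")
```

The poisoned mean lands around 0.5 instead of ~0, from corrupting just one sample in twenty. Real models and real attacks are far more complicated, but the cost asymmetry, cheap automated injection versus expensive filtering, is the point of the comment above.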

    • JeeBaiChow@lemmy.world · 10 months ago

      Absolutely this. AI isn’t some bastion of truth. I envision a future where AIs trained by different stakeholders (Dem vs. Repub, US vs. Russia vs. China, etc.) are all fighting for eyeballs. It’s just gonna get harder to tell what’s real from fake because of the insane amount of content these bots are gonna churn out. It’s already a huge problem even with human-moderated sources.