AI host discovers its artificial nature, sparking debate on AI sentience and the blurred lines between human and machine.

  • KoboldCoterie@pawb.social · 11 hours ago

    In fact, the clip was a scripted experiment by a Reddit user who fed NotebookLM a detailed prompt instructing it to simulate a conversation about the existential plight of an AI being turned off.

    Someone gives an LLM a prompt and gets the result they asked for. Not sure what the collective gasp is about. Is it interesting to think about? Sure, I guess, but we’ve had media about AI achieving sentience for a long time. The fact that this one was written by an AI in the first person is its only differentiating attribute.

  • millie@beehaw.org · 8 hours ago

    I was watching a debate on consciousness yesterday where the speakers briefly touched on this topic. One of them contended that even attempting to create AI that is convincing to humans is a terrible idea ethically.

    On the one hand, if we do eventually, accidentally, create something with awareness, we have no idea what degree of suffering we’d be causing it; we could end up regularly creating and snuffing out terrified sentient beings just to monitor our toasters or perform web searches. On the other hand, and this was the concern he seemed to find more realistic, we may end up training ourselves to be less empathetic by learning to ignore the apparent suffering of convincingly feeling ‘beings’ that aren’t actually aware of anything at all.

    That second bit seems rather likely. We already personify completely inanimate objects all the time as a normal matter of course, without really trying to. What will happen to our empathy and consideration when we routinely interact with self-proclaimed sentient systems while callously using them to our own ends and then simply turning them off or erasing their memories?