• sucrerey@lemmy.world
    2 days ago

    Weird question: if this worked, couldn't the same dataset be used to create a very skillful AI cybergroomer chatbot if it fell into the wrong hands?

    • simple@lemm.ee
      5 days ago

      If this is implemented right, it should flag accounts so human reviewers can follow up, not take action on its own.

      • Inucune@lemmy.world
        5 days ago

        Even so, the ‘flag’ could be enough damning evidence for some people to take action. We’re in ‘guilty until proven innocent’ cultural territory, where a mere accusation ruins lives.

  • General_Effort@lemmy.world
    4 days ago

    I guess most people don’t get how terrifyingly dystopian this is.

    In the EU, there is a serious push to make this mandatory.

  • devfuuu@lemmy.world
    5 days ago

    Get ready for when the AI bots start behaving like children to bait people and build relationships with them.

    • dustyData@lemmy.world
      4 days ago

      This has already happened. There was a news article about a police force that used AI to bait groomers. This is just further automation of something that’s already being done.