Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts
If a child is not being harmed, I truly do not give a shit.
The most compelling argument against AI-generated child porn I have heard is that it normalizes the material and makes it harder to tell whether an image is real or AI-generated. That allows actual children to get hurt when abuse is not reported, or is skimmed over, because someone thought it was AI.
And we have absolutely no data to suggest that’s happening. It’s purely hypothetical.
How do you think they train the models?
With a set of all images on the internet. Why do you people always think this is a “gotcha”?