falcor84 41 minutes ago

> "The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, "revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity."

The word "inherently" there seems like a big stretch to me. I see how it could be harmful to them, but I also see an argument for how such AI generated material is a substitute for the actual CSAM. Has this actually been studied, or is it a taboo topic for policy research?

  • defrost 26 minutes ago

    There's a legally challengeable assertion there: "trained on CSAM images".

    I imagine an AI image generation model could be readily trained on images of adult soldiers at war and images of children from Instagram, and then be used to generate imagery of children at war.

    I have zero interest in defending the exploitation of children, but the assertion that children had to have been exploited in order to create images of children engaged in adult activities seems shaky. *

    * FWIW I'm sure there are AI models out there that were trained on actual real-world CSAM .. it's the implied necessity that's being questioned here.

    • jsheard 16 minutes ago

      It is known that the LAION dataset underpinning foundation models like Stable Diffusion contained at least a few thousand instances of CSAM at one point. I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.

      https://www.forbes.com/sites/alexandralevine/2023/12/20/stab...

      • defrost 9 minutes ago

        > I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.

        I'd be hard pressed to prove that you definitely hadn't killed anybody ever.

        Legally, if it's asserted that these images are criminal because they are the product of a generative model trained on sources that contained CSAM, then the requirement would be to prove that assertion.

        With text and speech you could prompt the model to exactly reproduce a Sarah Silverman monologue and assert that proves her content was used in the training set, etc.

        Here the defense would ask the prosecution to demonstrate how to extract a copy of an original CSAM image from the model.

        But your point is well taken; it's likely most image generation models of this nature have been fed at least one image that was borderline jailbait and at least one that was well below the line.

  • metalcrow 31 minutes ago

    https://en.wikipedia.org/wiki/Relationship_between_child_por... is a good starting link on this. When I last checked, there were maybe 5 studies total (imagine how hard it is to get those approved by the ethics committees), all of which found different results, some totally the opposite of each other.

    Then again, it already seems clear that violent video games do not cause violence and that access to pornography does not increase sexual violence, so it would be unusual if this case turned out to be the opposite.

  • willis936 32 minutes ago

    It sounds like we should be asking "why is it okay that the people training the models have CSAM?" It's not like it's legal to have, let alone distribute in your for-profit tool.

    • wbl 27 minutes ago

      Read the sentence again. It doesn't claim the data set contains CSAM, only that it depicts victims. It also assumes that an AI needs to have seen an example in order to draw it on demand, which isn't true.

  • paulryanrogers 37 minutes ago

    Benefiting from illegal acts is also a crime, even if indirectly, like getting a cheap stereo that happens to have been stolen.

    A case could also be made that the likenesses of the victims could retraumatize them, especially if someone knew the connection and continued to produce similar output to taunt them.

  • ilaksh 7 minutes ago

    You probably have a point and I am not sure that these people know how image generation actually works.

    But regardless of a likely erroneous legal definition, it seems obvious that there needs to be a law in order to protect children, because you can't tell whether such images are real.

    Just as there should be a law against abusing extremely lifelike child robots in the future, when that becomes possible, or against any kind of abuse of lifelike adult robots, for that matter.

    Because the behavior is too similar and it's too hard to tell the difference between real and imagined. So allowing the imaginary will lead to more of the real, sometimes without the person even knowing.

adrr 7 minutes ago

It'll be interesting to see how this pans out in terms of the 1st Amendment. Without a victim, it's hard to predict how the courts would rule. They could say it's inherently unconstitutional but, for the sake of the general public, it's fine. That would be similar to the Supreme Court's ruling on DUI checkpoints.