A historic trial and sentence. In the UK, a 48-year-old man convicted of creating more than a thousand child sexual abuse images has been banned by the courts from using generative artificial intelligence tools for the next five years. The decision, the first of its kind, comes as campaign groups have been warning for months about the misuse of artificial intelligence and the proliferation of sexual abuse images generated with these tools.
In mid-April, the British government even announced the creation of a new offence criminalising the making of sexually explicit deepfakes of people over 18 without their consent. Those found guilty will face prosecution and an unlimited fine, and could even face prison if the image is subsequently shared.
An unprecedented decision
Without prior approval from the police, the 48-year-old may no longer “use, visit or access” tools such as text-to-image generators, which create realistic images from a written prompt, or visit “striptease” sites used to produce sexual deepfakes.
He was also explicitly ordered not to use the software Stable Diffusion, which earlier hearings at Poole Magistrates’ Court showed has already been used by child sex offenders to create hyper-realistic child sexual abuse material.
In the UK, sex offenders have long been subject to restrictions on their internet use, such as bans on private browsing, on access to encrypted messaging apps or on deleting their internet history. However, this is the first time such restrictions have been imposed on the use of AI tools, a first that could pave the way for new ways of monitoring convicted child sex offenders.
It is not clear whether the 48-year-old used artificial intelligence to produce the content that led to his conviction, or whether the ban is a preventive measure in response to the growing use of such tools for child sexual abuse imagery.
An evolving legislative arsenal
Since the 1990s, the creation, possession and distribution of artificial child sexual abuse material have been prohibited by law, the Guardian recalls.
Over the past decade, these laws have been applied several times to punish crimes involving realistic images, particularly ones created with Photoshop. Recent cases suggest the legislation is increasingly being used to address the threat posed by sophisticated artificial content: last year alone, six people went to court for possessing, making or sharing “pseudo-photographs”, some of them generated by artificial intelligence.
But it remains difficult to identify precisely how many cases involved AI images, as these are not counted separately in official data, and some fake images remain virtually indistinguishable from the real thing.
Last year, the Guardian points out, a team from the Internet Watch Foundation (IWF) infiltrated a child abuse forum on the dark web and found 2,562 artificial images so lifelike that the law would treat them as if they were real.
Susie Hargreaves, director general of the IWF, told our UK colleagues that while AI-generated images of sexual abuse currently represent a “relatively small” proportion of reports, the organisation is seeing a “slow but steady increase” in such cases, and that some of this material is “very realistic”. “We hope the prosecution sends a clear message to those who create and distribute this type of content.”
Source: Madmoizelle
