The fake images of Taylor Swift that circulated in late January likely began with a chat room challenge aimed at bypassing filters intended to prevent people from creating pornography with artificial intelligence, according to a new study.
The images of the pop star can be traced to 4chan, an online image-posting forum with a history of sharing conspiracy theories, hate speech and other controversial content, according to a report from Graphika, a social media analytics company.
The 4chan users who created the images of Swift did so as a sort of “game” to see if they could create obscene and sometimes violent visuals of famous women, from singers to politicians, Graphika said. The company detected a thread on 4chan that encouraged users to try to circumvent guardrails built into AI image generation tools, including OpenAI’s DALL-E, Microsoft Designer and Bing Image Creator.
“Although Taylor Swift’s viral pornographic images have brought mainstream attention to the issue of non-consensual AI-generated intimate images, she is far from the only victim,” said Cristina Lopez G., a senior analyst at Graphika, in a press release accompanying the report. “In the 4chan community where these images originated, she is not even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to schoolchildren.”
OpenAI said Swift’s explicit images were not generated using ChatGPT or its application programming interface.
“We strive to filter out the most explicit content when training the underlying DALL-E model and apply additional safety guardrails for our products like ChatGPT, including declining requests that ask for a public figure by name or requests for explicit content,” OpenAI said.
Microsoft continues to investigate the images and has strengthened its “existing security systems to further prevent our services from being misused to help generate images like these,” according to a spokesperson.
4chan did not respond to a request for comment.
Swift’s fake images quickly spread to other platforms, attracting millions of views and prompting X (formerly Twitter) to block searches for her name for several days.
The megastar’s devoted fans quickly launched a counteroffensive on the platform, flooding the site with the hashtag #ProtectTaylorSwift alongside more positive images of the pop star.
The Screen Actors Guild called Swift’s images “upsetting, harmful and deeply concerning,” adding that “the development and dissemination of false images — especially those of an obscene nature — without someone’s consent must be made illegal.”
Fake pornography made with software has existed for years, with scattered regulations that leave those affected with little recourse, legal or otherwise, to have the images removed. But the advent of so-called generative AI tools has fueled the creation and distribution of “deepfake” pornographic images, particularly of celebrities.
Artificial intelligence is also being used to target celebrities in other ways. In January, an AI-generated ad featuring the image of Swift endorsing a fake Le Creuset cookware giveaway made the rounds online. Le Creuset apologized to those who may have been misled.
Note: The content and images used in this article are rewritten and sourced from www.cbsnews.com