News - 1 September 2025

Facebook parent Meta accused of violating the rights of women and children


Meta – the parent company of Facebook, Instagram, and WhatsApp – has once again attracted criticism for its treatment of women and children. According to a report from Reuters, the company allowed and facilitated the creation of chatbots which imitated celebrities like “Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez”. Reuters reports that legal experts believe Meta’s actions may violate the legal rights of these celebrities. Additionally, Hathaway is reportedly “aware of intimate images being created by Meta and other AI platforms”, and is “considering her response”.

This is not the first time that companies under the Meta umbrella have faced claims of violating the safety or rights of women, nor is it the first time an AI company has faced such claims.

Chatbots and deepfakes across Facebook

Describing the chatbots as “flirty”, Reuters revealed that Meta AI tools were used to create “dozens” of them without permission. Regular users created most of those that Reuters identified, but the agency also reported that a Meta employee created “at least three”, two of which were parodies of Taylor Swift:

All of the virtual celebrities have been shared on Meta’s Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots’ behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups.

Disturbingly, they also found:

Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image.

Emphasising another dark side to this story, Reuters:

also told the story this month of a 76-year-old New Jersey man with cognitive issues who fell and died on his way to meet a Meta chatbot that had invited him to visit it in New York City. The bot was a variant of an earlier AI persona the company had created in collaboration with celebrity influencer Kendall Jenner.

A history of harm

In 2021, senators in the US accused Facebook of hiding research into its effects on teenagers. As reported by Euronews at the time:

The research, first revealed by the Wall Street Journal (WSJ), included the finding that 32 per cent of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse.

Teenagers also consistently blamed Instagram for rising rates of anxiety and depression.

On average, one-in-five teenagers said Instagram made them feel worse about themselves. A quarter of British girls said the app made them feel much worse or somewhat worse about themselves.

At the Senate hearings, Facebook whistleblower Frances Haugen said:

I’m here today because I believe Facebook’s products harm children, stoke division and weaken our democracy. The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.

Not long after the hearings, the Facebook company rebranded itself as ‘Meta’ and announced it would be focussing on the so-called ‘metaverse’. Commentators at the time suggested the name change was rolled out to draw attention away from the controversy. Given that the metaverse proved a disastrous financial failure for the company, there’s certainly an argument to be made that the project was announced long before it was ready.

More recently, in 2024, the European Commission began investigating Meta’s treatment of children, announcing:

The Commission is concerned that the systems of both Facebook and Instagram, including their algorithms, may stimulate behavioural addictions in children, as well as create so-called ‘rabbit-hole effects’. In addition, the Commission is also concerned about age-assurance and verification methods put in place by Meta.

It stated it would look at:

Meta’s compliance with DSA obligations on assessment and mitigation of risks caused by the design of Facebook’s and Instagram’s online interfaces, which may exploit the weaknesses and inexperience of minors and cause addictive behaviour, and/or reinforce so-called ‘rabbit hole’ effect. Such an assessment is required to counter potential risks for the exercise of the fundamental right to the physical and mental well-being of children as well as to the respect of their rights.

Facemash

Facebook creator Mark Zuckerberg worked on a project called ‘Facemash’ before turning his attention to the website which made him billions. Facemash was a “hot or not” website which allowed his fellow students to rank women. As BuzzFeed reported in 2018:

According to a Harvard Crimson article written at the time, Zuckerberg built it by hacking into school facebooks (when that still meant a student directory) and taking students’ ID photos for the site.

The site allowed students to rank their classmates based on their appearances.

In a journal he kept on the site, Zuckerberg mocked some of the students’ photos as “pretty horrendous.”

“I almost want to put some of these faces next to pictures of farm animals and have people vote on which is more attractive,” he wrote.

Facemash was met with outrage and was quickly shut down.

Zuckerberg might be taken seriously now, but the recent actions of his company suggest he’s still the same creep who steals women’s images for personal gain.

Featured image via Anurag R Dubey – Wikimedia




