Washington: More than 10 bogus ChatGPT apps created to defraud users have been blocked by Meta, the parent company of social media behemoth Facebook, according to the Mashable website.
The company has found that con artists are exploiting the public’s interest in ChatGPT, an AI-based language model, to trick people into downloading malicious software and browser add-ons.
By taking advantage of people’s trust in ChatGPT, the cybercriminals behind these fraudulent apps are launching attacks and compromising accounts across the internet. Once a user downloads the malicious software, the attackers can keep developing new strategies to evade security measures.
To counter this, Meta has identified and blocked the sharing of more than 1,000 malicious URLs across its apps. The company has also alerted the file-sharing platforms where the malware was hosted so that they can take the necessary action, Mashable reported.
The prevalence of online fraud is making the internet more dangerous, and even social media behemoths like Meta are now alerting users to the risks of fake ChatGPT apps. Users should exercise caution and download ChatGPT applications only from reputable sources.
Maya Jones* was only 13 when she first walked through the door of Courtney’s House, a drop-in centre for victims of child sex trafficking in Washington DC. “She was so young, but she was already so broken by what she’d been through,” says Tina Frundt, the founder of Courtney’s House. Frundt, one of Washington DC’s most prominent specialists in countering child trafficking, has worked with hundreds of young people who have suffered terrible exploitation at the hands of adults, but when Maya eventually opened up about what she had been through, Frundt was shaken.
Maya told Frundt that when she was 12, she had started receiving direct messages on Instagram from a man she didn’t know. She said the man, who was 28, told her she was really pretty. According to Frundt, Maya told her that after she started chatting with the man, he asked her to send him naked photos. She told Frundt that he said he would pay her $40 for each one. He seemed kind and he kept giving Maya compliments, which made her feel special. She decided to meet him in person.
Then came his next request: “Can you help me make some money?” According to Frundt, Maya explained that the man asked her to pose naked for photos, and to give him her Instagram password so that he could upload the photos to her profile. Frundt says Maya told her that the man, who was now calling himself a pimp, was using her Instagram profile to advertise her for sex. Before long, sex buyers started sending direct messages to her account, wanting to make a date. Maya told Frundt that she had watched, frozen, what was taking place on her account, as the pimp negotiated prices and logistics for meetings in motels around DC. She didn’t know how to say no to this adult who had been so nice to her. Maya told Frundt that she hated having sex with these strangers but wanted to keep the pimp happy.
One morning, three months after she first met the man, Frundt says, Maya was found by a passerby lying crumpled on a street in south-east DC, half-naked and confused. The night before, Maya told her, a sex buyer had taken her somewhere against her will, and she later recalled being gang-raped there for hours before being dumped on the street. “She was traumatised, and blamed herself for what happened. I had to work with her a lot to help her realise this was not her fault,” said Frundt when we visited Courtney’s House last summer.
Frundt, who has helped hundreds of children like Maya since she opened Courtney’s House in 2008, says that the first thing she now does when a young person is referred to her is to ask for their Instagram handle. Other social media platforms are also used to exploit the young people in her care, but she says Instagram is the one that comes up most often.
In the 20 years since the birth of social media, child sexual exploitation has become one of the biggest challenges facing tech companies. According to the United Nations Office on Drugs and Crime (UNODC), the internet is used by human traffickers as “digital hunting fields”, allowing them access to both customers and potential victims, with children being targeted by traffickers on social media platforms. The biggest of these, Facebook, is owned by Meta, the tech giant whose platforms, which also include Instagram, are used by more than 3 billion people worldwide. In 2020, according to a report by US-based not-for-profit the Human Trafficking Institute, Facebook was the platform most used to groom and recruit children by sex traffickers (65%), based on an analysis of 105 federal child sex trafficking cases that year. The HTI analysis ranked Instagram second most prevalent, with Snapchat third.
Grooming and child sex trafficking, though often researched and discussed together, are distinct acts. “Grooming” refers to the period of manipulation of a victim prior to their exploitation for sex or for other purposes. “Child sex trafficking” is the sexual exploitation of a child specifically as part of a commercial transaction. When the pimp was flattering and chatting with Maya, he was grooming her; when he was selling her to other adults for sex, he was trafficking her.
Though people often think of “trafficking” as the movement of victims across or within borders, under international law the term refers to the use of force, fraud or coercion to obtain labour or to buy and sell non-consensual sex acts, whether or not travel is involved. Because, under international law, children cannot legally consent to any kind of sex act, anyone who profits from or pays for a sex act from a child – including profiting from or paying for photographs depicting sexual exploitation – is considered a human trafficker.
Tina Frundt, the founder of Courtney’s House. Photograph: Melissa Lyttle/The Guardian
Meta has numerous policies in place to try to prevent sex trafficking on its platforms. “It’s very important to me that everything we build is safe and good for kids,” Mark Zuckerberg, Meta’s founder, wrote in a memo to staff in 2021. In a statement responding to a detailed list of the allegations in this piece, a Meta spokesperson said: “The exploitation of children is a horrific crime – we don’t allow it and we work aggressively to fight it on and off our platforms. We proactively aid law enforcement in arresting and prosecuting the criminals who perpetrate these grotesque offences. When we are made aware that a victim is in harm’s way, and we have data that could help save a life, we process an emergency request immediately.” The statement cited the group director of intelligence at the charity Stop the Traffik, a former deputy director of the UK’s Serious Organised Crime Agency, who has said “millions are safer and traffickers are increasingly frustrated” because of their work with Meta.
But over the past two years, through interviews, survivor testimonies, US court documents and human trafficking reporting data, we have heard repeated claims that Facebook and Instagram have become major sales platforms for child trafficking. We have interviewed more than 70 sources, including survivors and their relatives, prosecutors, child protection professionals and content moderators across the US in order to understand how sex traffickers are using Facebook and Instagram, and why Meta is able to deny legal responsibility for the trafficking that takes place on its platforms.
While Meta says it is doing all it can, we have seen evidence that suggests it is failing to report or even detect the full extent of what is happening, and many of those we interviewed said they felt powerless to get the company to act.
The survivors
Courtney’s House sits on a quiet residential street on the outskirts of Washington DC. Inside, Frundt and her team have tried to make the modest two-storey house feel like a family home, with comfortable sofas and photos on the mantelpiece. Frundt, who was herself trafficked as a child in the 1980s and 90s, is now one of Washington DC’s most experienced and respected anti-trafficking advocates. Warm and ferociously protective of the children in her care, she is contracted by the city’s child protection services to identify trafficked children going through the court system, and she regularly attends court hearings for the youth in her care. She also helps train the FBI and local law enforcement sex-trafficking units on how to spot traffickers on online platforms, including Instagram. “When I was trafficked long ago I was advertised in the classified sections of freesheet newspapers,” Frundt told us. “Now my youth here are trafficked on Instagram. It’s exactly the same business model but you just don’t have to pay to place an ad.”
The children who are referred to Frundt, usually by the police or social services, have been sexually exploited and controlled: by a boyfriend, a pimp, a family member. Some of them are as young as nine. Almost without exception, they have childhoods scarred by sexual abuse, poverty and violence. This makes them perfect targets for sexual predators. “They are all looking for love and affirmation and a sense that they mean something,” said Frundt.
Almost all the young people who come to Courtney’s House are children of colour. They are, Frundt said, battling stereotypes that pressure them to become sexualised too early and make them vulnerable to traffickers. A 2017 study by the Georgetown Law Center on Poverty and Inequality found that adults typically regard Black girls as less innocent and more knowledgeable about sex than their white peers. The same study showed that Black girls are often perceived to be older than they are.
Most of the time, Frundt says, the children who come to Courtney’s House are still being trafficked when they walk through the door. Even in cases where they have escaped their exploiters, she said, explicit videos and photos of them often continue to circulate online. Traffickers will lock victims out of their accounts, preventing them from taking down images posted to their profiles.
When we asked Frundt if she could show us examples of young people in her care who she says are currently being trafficked on Instagram, she pulled out her phone and scrolled through post after post of explicit images and videos of girls as young as 14 or 15. Most of the photos and videos seemed to have been taken by someone else. Frundt said that these posts were being used as a way of advertising the girls for potential sex buyers, who would send a direct message to buy explicit content or to arrange a meet up.
At one point, our conversation was interrupted by the arrival of five teenage girls. They had come back from school, and they gathered around the kitchen table, chatting and playing music on their phones while Frundt served them casserole. After they had eaten, we asked if we could talk to them about their experiences: had any of them been sexually exploited on social media or had explicit videos or pictures posted of them?
They glanced at each other and burst out laughing. Yes, they said, of course. All the time. One girl said she felt that “nobody at Instagram cares, they don’t care what’s posted. They don’t care shit about us.”
Frundt claims that she is constantly asking Instagram to close accounts and take down exploitative content of kids in her care. “I even have law enforcement calling me up asking, ‘Tina, can you get Instagram to do something?’ If I can’t get Instagram to act, what hope is there for anyone else?”
When we put these concerns to Meta, a spokesperson said: “We take all allegations and reports of content involving children extremely seriously and have diligently responded to requests from Courtney’s House. Our ability to remove content or delete accounts requires sufficient information to determine that the content or user violates our policies.”
Frundt says that in 2020 and 2021 she had discussions with Instagram about conducting staff training to help prevent child trafficking on its platforms. She says the training didn’t go ahead because, after a long back and forth, Instagram executives said on a video call that they wouldn’t pay Frundt her standard fee of $3,000, allegedly offering $300 instead. When we put this to Meta, they did not deny it.
The court documents and the prosecutors
What makes social media platforms so powerful as a tool for traffickers – far more powerful than the back pages of a newspaper in which Frundt was advertised as a teenager – is the way that they make it possible to identify and cultivate relationships with both victims and potential sex buyers. Traffickers can advertise and negotiate deals by using different features of the same platform: sellers sometimes post publicly about the girls they have available, and then switch to private direct messages to discuss prices and locations with buyers.
US court documents provide a graphic insight into how these platforms can be used. In one case prosecuted in Arizona in 2019, Mauro Veliz, a 31-year-old who was convicted of conspiracy to commit sex trafficking of a child, exchanged messages on Facebook Messenger with Miesha Tolliver, who also received jail time for sex trafficking. Tolliver told Veliz that she had one girl available for sex, and photographs of two more, before saying that the girls were aged 17, 16 and 14.
Veliz: “How much is it for all of them?”
Tolliver: “The 14 [year-old] will cost the most … a couple of hundred for her but [$] 150 for the rest”
The 14-year-old, Tolliver told Veliz, was “new to the sex game”.
Tolliver: “The 1 on the right … is 16 with a fat ass … the other [is] 15 with huge tits”
The court transcripts then state that multiple sexually explicit images of the girls were sent to Veliz.
Tolliver: “do you want me to bring 1 of the girls with me so you guys can fuck?”
[ … ]
Veliz: “is your girl nervous? Or have you told her yet?”
Tolliver: “… shes still young and doesn’t understand how ppl like it”
Tolliver and Veliz exchanged more messages, arranging for Veliz to meet the girl in a hotel in California two days later.
The final message submitted to the court was from Veliz to Tolliver. “We’re finished she’s in the restroom,” it said.
Luke Goldworm, a former assistant district attorney in Boston, Massachusetts, who has investigated and prosecuted human trafficking cases for years, says that he has encountered numerous exchanges like this one. From 2019 until he left the job in October 2022, he said, his department’s caseload of child-trafficking crimes on social media platforms increased by about 30% each year. “We’re seeing more and more people with significant criminal records move into this area. It’s incredibly lucrative,” he said. A trafficker can make up to $1,000 a night. Many of the victims he saw were just 11 or 12, he said, and most of them were Black, Latinx or LGBTQI+.
According to Goldworm, while his investigations involved every social media platform, Meta platforms were the ones he encountered most often. Six other prosecutors in several different states told us that, in their experience, Facebook and Instagram are being widely used to groom and traffic children. Five of these prosecutors spoke of their anger over what they felt were Meta’s unnecessary delays in complying with judge-signed warrants and subpoenas needed to gather evidence in sex trafficking cases. “We get a higher rate of rejected warrants from Facebook than any other electronic service provider,” claimed Gary Ernsdorff, senior deputy prosecuting attorney for King County, Washington state. “What I find frustrating is that the exchange can delay rescuing a victim by a month.”
Three of these prosecutors described experiences where they say the company would cite technicalities, picking faults with wording and format, and slowing down investigations. In response, the company said that these claims were “false”, adding that between January and June last year, it “provided data in nearly 88% of requests from the US government”.
The responsibility for reporting
Meta acknowledges that human traffickers use its platforms, but insists that it is doing everything in its power to stop them. By law, the company is required to report any child sexual abuse imagery shared over its platforms to the National Center for Missing & Exploited Children (NCMEC), which receives federal funding to act as a nationwide clearing house for leads about child abuse. Meta is a major funder of NCMEC, and holds a seat on the organisation’s board.
From January to September 2022, Facebook reported more than 73.3m pieces of content under “child nudity and physical abuse” and “child sexual exploitation”, and Instagram reported 6.1m. “Meta leads the industry in using the most sophisticated technology to detect both known and previously unknown child exploitation content,” said a company spokesperson. Of the 34m pieces of child sexual exploitation content removed from Facebook and Instagram in the final three months of 2022, 98% was detected by Meta itself.
But the vast majority of the content that Meta reports falls under child sexual abuse material (CSAM) – sexually explicit photos and videos of children – rather than sex trafficking. Unlike with child sexual abuse imagery, there is no legal requirement to report child sex trafficking, so NCMEC must rely on social media companies to be proactive in searching for and reporting it. This legal inconsistency is a major problem, says Staca Shehan, vice-president of the analytical services division at NCMEC. “It’s concerning across the board how little trafficking is being reported,” Shehan says. Social media companies “are prioritising what’s [legally] required”.
“I think everyone could do more,” Shehan says. “The volume of child sexual abuse material (CSAM) and volume of trafficking [being reported] is like apples and oranges.” According to Shehan, one further reason for this disparity, beyond the differing legal requirements, is technological. “Child sexual abuse material is that much easier to detect. There are so many technology tools that have been developed that allow for the automated detection of that crime.”
An NCMEC spokesperson told us that if social media companies are not reporting child sex trafficking, this allows the crime to thrive online. Reporting trafficking, they emphasised, is crucial for rescuing victims and punishing offenders.
Between 2009 and 2019, Meta reported just three cases as suspected child sex trafficking in the US to NCMEC, according to records disclosed in a subpoena request seen by the Guardian.
Meta founder Mark Zuckerberg in Washington DC in 2019. Photograph: Michael Reynolds/EPA
A spokesperson for NCMEC confirmed this figure, but clarified that a number of child trafficking cases during the same time period were reported by Meta under other “incident types”, such as child pornography or enticement. “I think one of the things to be aware of is that there’s sort of a singular tag that’s used for reporting,” Antigone Davis, head of global safety at Meta, emphasised to us in a recent interview. “And so just because something isn’t tagged as sex trafficking doesn’t mean that it isn’t being reported.”
A Meta spokesperson claimed that over the past decade, the company had reported “tens of thousands of accounts which violated our policies against child sex trafficking and commercial child sexual abuse material to NCMEC.” When we put these claims to NCMEC, it said that it had not received “tens of thousands” of reports of child trafficking from Meta, but had received that number related to child abuse imagery.
Hany Farid is a professor at the University of California, Berkeley, who helped invent the PhotoDNA technology that Meta uses to identify harmful content. He believes Meta, which is currently valued at more than $500bn, could do more to combat child trafficking. It could, for instance, be investing more to develop better tools to “flag suspicious words and phrases on unencrypted parts of the platform – including coded language around grooming,” he said. “This is, fundamentally, not a technological problem, but one of corporate priorities.” (There is a separate debate about how to handle encryption. Meta’s plans to encrypt direct messages on Facebook Messenger and Instagram have recently drawn criticism from law enforcement agencies, including the FBI and Interpol.)
In response to Farid’s claims and further questions from the Guardian, Meta did not specify how much money it has invested in technologies to detect child sex trafficking, but said that it had “focused on using AI and machine learning on non-private, unencrypted parts of its platforms to identify harmful content and accounts and make it easier for people to report messages to the company so we can take action, including referrals to law enforcement”. Davis also emphasised that Meta constantly works with partners to improve its anti-trafficking safeguards. For instance, she mentioned that “we’ve been able to identify the kinds of searches that people do when they’re searching for trafficking content, so that when people search for that, we will pop up with information to divert them or to let them know that what they’re doing is illegal activity”.
These efforts have failed to satisfy some of Meta’s own investors. In March, several pension and investment funds that own Meta stock launched legal action against the company in Delaware over its alleged failure to act on “systemic evidence” that its platforms are facilitating sex trafficking and child sexual exploitation. By offering insufficient explanation of how it is tackling these crimes, the complaint says, the board has failed to protect the interests of the company. Meta has rejected the basis for the lawsuit. “Our goal is to prevent people who seek to exploit others from using our platform,” the company said.
The moderators
As well as software, Meta uses teams of human moderators to identify cases of child grooming and sex trafficking. Until recently, Anna Walker* worked the night shift in an office of a Meta subcontractor. She would start each shift filled with dread. “We were just, like, shoved in a dark room to look at the stuff,” she said.
Walker’s job was to review interactions between adults and children on Facebook Messenger and in Instagram direct messages that had been flagged as suspicious by Meta’s AI software. Walker claims she and her team struggled to keep pace with the huge backlog of cases. She says she saw cases of adults grooming children and then making plans to meet them for sex, as well as discussions about payment in exchange for sex.
Walker’s managers would pass on such cases to Meta to decide if action should be taken against the user. In some cases, Walker claims: “Months would pass and then the automatic bot would send me an email saying it was closing this case, because nobody’s taken action on it.” She added: “I would cry to my manager about [the children I saw] and how I want to help. But it felt like nobody would pay attention to these horrible things.”
We talked to six other moderators who worked for companies that Meta subcontracted between 2016 and 2022. All made similar claims to Walker. Their efforts to flag and escalate possible child trafficking on Meta platforms often went nowhere, they said. “On one post I reviewed, there was a picture of this girl that looked about 12, wearing the smallest lingerie you could imagine,” said one former moderator. “It listed prices for different things explicitly, like, a blowjob is this much. It was obvious that it was trafficking,” she told us. She claims that her supervisor later told her no further action had been taken in this case.
When we put these claims to Meta, a spokesperson said that moderators such as Walker do not typically get feedback on whether their flagged content has been escalated. They stressed that if a moderator does not hear back about a flagged case, that does not mean no action has been taken.
Five of the moderators claimed that it was harder to get cases escalated or content taken down if it was posted on closed Facebook groups or Facebook Messenger. Meta “would be less stringent about something taking place behind ‘closed doors’,” claimed one team leader. “With Messenger, we really couldn’t make any moves unless the language and content was really obvious. If it was four guys who trusted each other and it was in a group it could just live on for ever.” Meta said these allegations “appear to be misleading and inaccurate” and said it uses technology to find child sexualisation content in private Facebook groups and on Messenger.
Former Facebook data scientist Frances Haugen speaking at a Senate hearing on consumer protection, product safety and data security in Washington DC in 2021. Photograph: Alex Brandon/AP
In 2021, former Facebook employee and whistleblower Frances Haugen leaked internal documents that seem to support the moderators’ claims. These documents, which numbered thousands of pages, detailed how the company managed harmful content. In one memo from the Haugen leak, the company states that “Messenger groups with less than 32 people should be treated with a full expectation of privacy”.
Matias Cruz*, who worked as a content moderator from 2018 to 2020, reviewing Spanish-language posts on Facebook, believes that the criteria Meta was using to recognise trafficking were too narrow to keep up with traffickers, who would constantly switch codewords to avoid detection. According to Cruz, traffickers would say: “‘I have this cabra [Spanish for goat] for sale,’ and it’d be some really ridiculous price. Sometimes they would just outright say [the price] for a night or two, or ‘an hour’.” It was obvious what was going on, said Cruz, but “the managers would claim it was too vague, so in the end they would just leave it up”.
Cruz and three other moderators we spoke to claimed that in examples like this, where their managers felt there was insufficient evidence to escalate the case, moderators could receive lower accuracy scores, which in turn would affect their performance assessments. “We would take negative hits on their accuracy scores to try to get some help to these people,” Cruz said.
The limits of the law
While the law requires Meta to report any child exploitation imagery detected on its platforms, the company is not legally responsible for crimes that occur on its platform, because of a law created almost three decades ago, in the early days of the internet. In 1996, the US Congress passed the Communications Decency Act, which was primarily intended to ensure online pornographic content was regulated. But section 230 of the act states that providers of “interactive computer services” – which includes the owners of social media platforms and website hosts – should not be treated as the publisher of material posted by users. This section was included in the act to ensure the free flow of information while protecting the growing tech industry from being crushed by litigation.
Whereas a newspaper, say, must legally defend what it publishes, section 230 means that a company like Meta, which hosts the content of others, may not be held liable for what appears on its platforms. Section 230 therefore positions internet service providers as fundamentally neutral: offering forums in which illegal, harmful or false content may be posted and circulated, but ultimately not responsible for that content. Since the passing of the act, tech companies such as Meta have argued successfully in courts across the US that section 230 provides them with complete immunity from prosecution for any illegal content published on their platforms, as long as they are unaware of that content’s existence.
The debate around section 230 has become highly polarised. Those who want section 230 amended say that the legal safe harbour it has provided for internet companies means they have no incentive to root out illegal content on their sites. In an op-ed published in the Wall Street Journal in January, President Biden spoke out in favour of the section’s reform. “I’ve long said we must fundamentally reform section 230,” he wrote, calling for “bipartisan action by Congress to hold big tech accountable.”
However, tech companies, along with internet freedom groups, argue that changes to section 230 could lead to censorship and an erosion of privacy, particularly for private, encrypted content. These arguments over section 230 are being put to the test in a landmark case that has reached the US supreme court, which focuses on how far YouTube can be considered culpable for the videos it recommends to its users. A ruling is due by the end of June.
The consequences
Kyle Robinson is one year into serving a 10-year sentence at a federal prison in Massachusetts for sex trafficking two teenagers, one only 14 years old. We spoke to him in January over the muffled line of the prison’s payphone, our conversation interrupted by prison staff monitoring the call. Referring to himself as a pimp, Robinson described how he sought out damaged girls from care homes and on social media as a way to make money.
Instagram, he said, was his platform of choice. “I find the girls that have pride in themselves, but maybe don’t have the confidence, the self-esteem,” he claimed. “I make her feel special. I give her validation, social skills, her ‘hotential’, if you know what I mean.”
Once he had identified his targets, Robinson claimed that he would “coach” them and advertise them on their Instagram accounts and his own. He would talk to potential buyers through direct messages, offering to send video snippets of the girls in return for “a small deposit” – about $20 – so that the buyers could see what they would be getting. If a buyer decided to meet a girl, he would pay her the rest of the money later, via CashApp, he said. Robinson would then take most of that money.
To crack down on such cases of child sexual exploitation, last June Meta announced new policies, including age verification software that will require users under 18 to prove their age by uploading an ID, recording a video selfie, or asking mutual friends on Facebook to confirm their age. When we asked Tina Frundt about these new measures, she was sceptical. The kids she works with had already found workarounds; a 14-year-old, for example, might use a video selfie made by her 18-year-old friend, and pretend that it’s her own.
Tina Frundt in Washington DC. Photograph: Melissa Lyttle/The Guardian
Even after children have been referred to Courtney’s House, they continue to be vulnerable to traffickers. One night in June 2021, Frundt says she got a call from Maya, telling her she had arrived home safe. Frundt was relieved: she knew that Maya had spent the evening with a 43-year-old man who had been contacting her on Instagram.
Frundt says that Maya, now 15, was in a fragile state: over the previous few months, her mental health had been in sharp decline and she had told Frundt she’d been feeling suicidal. Photos and explicit videos taken by a pimp showing her having sex were being circulated and sold on Instagram. Sex buyers were contacting her relentlessly through her direct messages. “She didn’t know how to make it stop or how to say no,” Frundt recalled.
That night, on the phone, Frundt told Maya that she loved her and that they would talk in the morning. “That’s the last time I ever spoke to her,” said Frundt. The older man had given Maya drugs. When Maya’s mother went to wake her daughter the next morning, she found her dead.
A picture of Maya that still hangs on the wall of Courtney’s House shows a baby-faced teenage girl with brown curls and a huge smile. Two years after her death, Frundt continues to grieve for her caring “girly girl” who loved makeup, board games and dancing to her favourite Megan Thee Stallion songs. “Losing one of our youth, it changes you for ever. You can never forgive yourself,” she said.
Before Maya died, Frundt claims she spoke to Instagram on a video call, asking them to remove the exploitative content her trafficker had circulated. Frundt says that when Maya died, the videos of her being exploited were still on the platform.
In July 2021, a representative from an anti-trafficking organisation sent an email to Instagram’s head of youth policy, informing her of Maya’s death. Frundt was copied in on the email. It asked why Meta’s tools designed to detect grooming had not flagged a 43-year-old man contacting a young girl. Four days later, the company sent a brief reply. If Instagram was provided with details about the alleged trafficker’s account, it would investigate.
But Frundt says that it was too late. “She had already passed,” she says. “They could have done something to help her but they didn’t. She was gone.”
Names marked with an asterisk have been changed to preserve anonymity.
(With inputs from: www.theguardian.com)
The world’s largest social media platforms, including Facebook, Twitter and TikTok, will have to crack down on illegal and harmful content or face hefty fines under the European Union’s Digital Services Act from as early as August 25.
The European Commission today will designate 19 very large online platforms (VLOPs) and search engines that will fall under the scrutiny of the wide-ranging online content law. These firms will face strict requirements, including swiftly removing illegal content, ensuring minors are not targeted with personalized ads, and limiting the spread of disinformation and harmful content like cyberbullying.
“With great scale comes great responsibility,” said the EU’s Internal Market Commissioner Thierry Breton in a briefing with journalists. “As of August 25, in other words, exactly four months [from] now, online platforms and search engines with more than 45 million active users … will have stronger obligation.”
The designated companies with over 45 million users in the EU include:
— Eight social media platforms, namely Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat;
— Five online marketplaces, namely Amazon, Booking, AliExpress, Google Shopping and Zalando;
— Other platforms, including Apple and Google’s app stores, Google Maps and Wikipedia, and search engines Google and Bing.
These large platforms will have to stop displaying ads to users based on sensitive data like religion and political opinions. AI-generated content like manipulated videos and photos, known as deepfakes, will have to be labeled.
Companies will also have to conduct yearly assessments of the risks their platforms pose on a range of issues like public health, kids’ safety and freedom of expression, and lay out the measures they are taking to tackle those risks. The first assessment will have to be finalized by August 25.
“These 19 very large online platforms and search engines will have to redesign completely their systems to ensure a high level of privacy, security and safety of minors with age verification and parental control tools,” said Breton.
External firms will audit their plans. The enforcement team in the Commission will access their data and algorithms to check whether they are promoting a range of harmful content, such as content endangering public health or elections.
Fines can go up to 6 percent of a company’s global annual turnover, and very serious cases of infringement could result in platforms facing temporary bans.
Breton said one of the first tests for large platforms in Europe will be elections in Slovakia in September because of concerns around “hybrid warfare happening on social media, especially in the context of the war in Ukraine.”
“I am particularly concerned by the content moderation system of Facebook, which is a platform playing an important role in opinion building, for example for Slovak society,” said Breton. “Meta needs to carefully investigate its system and fix it, where needed, ASAP.”
The Commission will also go to Twitter in the U.S. at the end of June to check whether the company is ready to comply with the DSA. “At the invitation of Elon Musk, my team and I will carry out a stress test live at Twitter’s headquarters,” added Breton.
TikTok has also asked the Commission to check whether it will be compliant, but no date has been set yet.
The Commission is also in the process of designating “four to five” additional platforms “in the next few weeks.” Porn platforms like PornHub and YouPorn have said that 33 million and 7 million Europeans, respectively, visit their websites every month, putting them below the 45 million-user threshold and sparing them the extra requirements to tackle the risks they could pose to society.
(With inputs from: www.politico.eu)
Nagpur: A 27-year-old man from Maharashtra’s Nagpur hanged himself while live-streaming the act on Facebook for almost 40 minutes in the early hours of Tuesday, police said.
Krutank Siddharth Dongre, a resident of Kamptee, committed suicide while live-streaming from his wife’s Facebook account around 1.30 am, an official said.
The man, who was unemployed and an alcoholic, had had a tiff with his wife, who had left him, the official said.
When his family was away on Monday, Krutank consumed alcohol and logged in to his wife’s Facebook account late in the night, the official said.
While live-streaming using his cellphone, he hanged himself from a ceiling fan with a scarf, he said.
Once the video went viral, the man’s relatives, neighbours and friends gathered in front of his house and the police also reached the spot, the official said.
The police have registered an accidental death report, he added.
Days after the Chief Secretary warned employees against misuse of social media, a teacher in J&K’s Ramban district has been suspended for a Facebook post allegedly critical of the government.
The suspension order to this effect was issued by the District Magistrate of Ramban, Mussarat Islam.
“Pending enquiry and for violations of directions passed by the government regarding criticism of policies of the government by its employees on social media platforms, Joginder Singh, teacher GPS Chanderkote, is hereby placed under suspension with immediate effect,” the order read.
The order stated that the teacher shall remain attached to the office of the Chief Education Officer, (CEO) Ramban.
The District Magistrate has also ordered the constitution of an inquiry committee, headed by the Additional District Development Commissioner, Ramban, to probe the matter.
The other members of the committee include Chief Education Officer, Ramban, Zonal Education Officer, Batote and Headmaster HS Chanderkote.
“The Inquiry Committee so constituted shall initiate an in-depth enquiry and submit a detailed/comprehensive report in the matter, along with specific recommendations, on or before March 25, 2023,” the order read.
The J&K Chief Secretary, at a meeting chaired on February 17, 2023, directed all Administrative Secretaries to monitor social media networks on a regular basis and identify government employees criticising or commenting adversely on government policies and achievements on various social media platforms.
“Some Facebook posts were found making rounds on social media networks on which the policies and achievements of government were criticised, and after scrutinising the said Facebook page(s), it has been found that the Facebook account is in the name of one Joginder Singh, who happens to be a teacher in the School Education Department, Ramban, presently posted at GPS Chanderkote, Education Zone Batote,” the order said.
It read that a report was sought from the CEO Ramban to ascertain whether the said Joginder Singh was working as a teacher in the School Education Department, District Ramban.
“The Chief Education Officer, Ramban, sought a detailed report from the Zonal Education Officer, Batote, regarding the said Facebook page of the aforementioned person, and the Zonal Education Officer, Batote, submitted his detailed report to the Chief Education Officer, Ramban, and the same was endorsed to this office vide No.CEO/R/23/24775-76, dated February 22, 2023, by the Chief Education Officer, Ramban,” the suspension order read.
After perusal of the said report, it was confirmed that Joginder Singh is working as a teacher at GPS Chanderkote and is presently deployed at MS Sawni, Zone Batote, the order reads.
“It has been revealed that his pay was kept withheld for misuse of social media in 2020 also by the Chief Education Officer, Ramban and the same was released after submission of written apology from the said Teacher,” the order informed.
“It has been found that the said teacher has posted various posts criticizing and commenting adversely about the Government policies on his Facebook page. Besides, he has concealed his identity and made fake Facebook ID with a profession as socio-political activist and not a Government Teacher,” the order read.
The order stated that after proper scrutiny of the social network account, it was found that four Facebook pages are running in the name of the said teacher; in only two of them has he given his designation as teacher, while in the others he has described himself as a socio-political activist.
“The said teacher was transferred from Higher Secondary School Rajgarh to Government Girls Primary School, Chanderkote, during the previous ATD, but the PRIs of Panchayat Kunfer (Chanderkote) objected to his posting in GPS Chanderkote due to his doubtful character,” the order read.
New York: Indian-American judge Vince Chhabria has slapped a fine of almost $1 million on Meta, Facebook’s parent company, and its law firm for creating obstacles for the court and users in a data breach case.
According to a Bloomberg report, District Judge Chhabria wrote in an order that the fine is “loose change” for Facebook and Gibson Dunn & Crutcher LLP for deceitfully denying that it shared users’ private information with third parties.
The San Francisco judge said that Facebook relied on “delay, misdirection, and frivolous arguments” to make the litigation unfairly difficult and expensive. “Perhaps realising they had no real argument for withholding these documents, Facebook and Gibson Dunn contorted various statements” of opposing lawyers and the court “beyond recognition,” Chhabria wrote, according to Bloomberg.
“And again, after being told repeatedly that these arguments made no sense, Facebook and Gibson Dunn insisted on pressing them,” he said.
The judge added that Facebook also attempted to push the users, who had filed a complaint against it, into settling for a lesser compensation.
The lawsuit was filed in a California court on behalf of Facebook users impacted by Meta’s partnership with research firm Cambridge Analytica.
The $925,078.51 penalty comes after Meta agreed to a $725 million settlement in December 2022 to resolve a class-action lawsuit, which claimed that Facebook illegally shared user data with Cambridge Analytica.
In March 2018, whistleblower Christopher Wylie publicly revealed that Cambridge Analytica had harvested the personal data of 87 million Facebook users, most of them in the US, in order to influence the results of the 2016 US presidential election.
This data trove included Facebook users’ ages, interests, pages they liked, groups they followed, physical locations, political and religious affiliations, relationships, and photos, as well as their full names, phone numbers, and email addresses.
“I’ll be curious to see if the Trump team runs into a similar situation,” she added.
Trump was suspended from Facebook in early 2021 for his role in inciting the Jan. 6 riot. But the suspension wasn’t permanent, and Meta, Facebook’s parent company, said earlier this week that it would be lifted soon.
“President Trump should have never been banned, so getting back on this platform allows the campaign access to that universe once again,” Trump campaign spokesperson Steven Cheung said in a statement. “We are getting closer to the full spectrum of building out the operation and dominating at every level, which we have already been doing based on poll numbers.”
The platform Trump is rejoining, however, is different from the one from which he was exiled. And how his team manages those changes could go a long way in determining the success of his efforts for a second term as president.
For starters, Facebook placed notable restrictions on ad targeting for political clients at the beginning of last year. And in 2021, Apple turned off ad tracking on its phones for users by default.
Those alterations represented a seismic shift for the advertising world. They also had profound impacts on political campaigns. Digital operatives from both parties say the changes have made it less valuable for campaigns to advertise on the social media behemoth.
One Republican who worked on statewide campaigns in recent cycles, who was granted anonymity to discuss internal fundraising metrics, said there was a notable dip in campaigns’ return on investment. “In 2020, [return on investment] on a really good day would be 200 percent. The minimum was 150 percent in 2020,” the operative said. “In 2022, it would be 90 percent or 80 percent. We would celebrate it when 110 [percent] came in.”
A Trump adviser close to his campaign acknowledged that the change in targeting would make Facebook less effective, but said that the lack of access had nonetheless been “a huge hindrance from a fundraising standpoint.”
“You’ve gone from an area where you’re able to be very certain about how your return on ad spend is taking effect, to a little bit more fuzzy,” said Mark Jablonowski, the president and chief technology officer of DSPolitical, a major Democratic digital ad firm. “It’s not that it doesn’t work anymore, but it definitely has made it harder to prove its efficacy.”
There was a noticeable retrenchment on political Facebook ad spending during the midterms, particularly among major Republican candidates and organizations. Statewide Republican campaigns and groups rarely cracked the list of top political spenders on the platform, even as Democratic statewide candidates still poured in money.
“Candidates struggled to raise money online” in the midterms, said Eric Wilson, a veteran GOP digital operative. “The playbook for fundraising on Facebook has changed and the Trump campaign, like any other candidate, is going to have to adapt to that. And no one has quite figured that out yet.”
Facebook, Wilson allowed, could be “more of a bronze goose now” for Trump than the golden one it once was. That may be especially true as Facebook has signaled that it would close off Trump’s access again if he were to exhibit the behavior that got him banned in the first place.
Even those GOP entities that continued to bet big on Facebook found the payoff lacking. The National Republican Senatorial Committee poured money into the platform in 2021 and early 2022 in hopes of building up a sustainable small-dollar program. But that high-profile bet ended up crumbling under its own weight.
Trump’s political operation also significantly scaled back its advertising on the platform during the midterms. While Trump himself was banned from Facebook, his fundraising arms were still allowed to advertise — with notable restrictions, including not posting in the voice of the former president.
But it was much more muted from when Trump was actively campaigning for higher office. Between June of last year — when his committees resumed advertising after his ban — and the launch of his campaign in mid-November, Trump’s leadership PAC Save America and affiliated fundraising committees spent over $2 million on ads on Facebook and Instagram.
By contrast, between May 2018, when Meta made political spending data public, and the Nov. 2020 election, Trump’s political operation spent over $113 million on advertising on his main Facebook page alone. That total doesn’t account for the tens of millions his presidential campaign spent on affiliated pages. The president’s political operation was the most prolific advertiser on the platform during that cycle.
Since launching his third bid for the White House, Trump’s political campaign has not spent any meaningful money on ads on Facebook and Instagram.
Few other would-be 2024 Republican candidates have spent a sizable amount on Facebook to date either. Over the last 30 days — from Dec. 25 through Jan. 23 — just two potential primary challengers to Trump have spent five figures on the platform: Nearly $62,000 for Florida Gov. Ron DeSantis, who appears to be running a significant campaign to build up his supporter list, and just over $10,000 for former Maryland Gov. Larry Hogan.
Trump’s team argued to POLITICO shortly after his launch that, given that the campaign was just beginning, “resources are better spent on other platforms and programmatically across the internet.”
But after the reinstatement, those in the former president’s orbit said they expected it to play a bigger role. “The enormity of it can’t be understated and you can talk to so many people and you can target people,” said the adviser.
“I’m not saying it’s a silver bullet,” the adviser added, stressing that “If you become too reliant on one mode of fundraising, you write your own obituary.”
Meredith McGraw and Sam Stein contributed to this report.
(With inputs from: www.politico.com)
San Francisco: Meta, the parent company of Facebook, has announced that it will be reinstating former US President Donald Trump’s Facebook and Instagram accounts in the coming weeks.
The announcement was made by Meta’s President of Global Affairs Nick Clegg in a blog post on Wednesday.
Meta suspended Trump’s Facebook and Instagram accounts on January 7, 2021, following his praise for people engaged in violence at the Capitol on January 6, 2021; the suspension was later set at two years.
“The suspension was an extraordinary decision taken in extraordinary circumstances. The normal state of affairs is that the public should be able to hear from a former President of the US, and a declared candidate for that office again, on our platforms,” Clegg elaborated in the blog post.
“Like any other Facebook or Instagram user, Mr Trump is subject to our Community Standards. In light of his violations, he now also faces heightened penalties for repeat offences – penalties which will apply to other public figures whose accounts are reinstated from suspensions related to civil unrest under our updated protocol,” he added.
Clegg also said that Meta is reinstating Trump’s Facebook and Instagram accounts with new guardrails in place to “deter repeat offences”.
“We know that any decision we make on this issue will be fiercely criticised. Reasonable people will disagree over whether it is the right decision. But a decision had to be made, so we have tried to make it as best we can in a way that is consistent with our values and the process we established…,” he asserted.
SRINAGAR: Social media influencers and celebrities who violate the guidelines released by the consumer affairs ministry on Friday will face a fine of up to Rs 10 lakh, which can go up to Rs 50 lakh for repeat offences, and could even face a ban of up to six years.
Consumer Affairs Secretary Rohit Kumar Singh told mediapersons, while releasing the guidelines, that the whole issue is centred on consumers’ rights.
“It is the responsibility of the endorser, celebrities and influencers or other advertisers to truthfully disclose whatever information the consumer must know before making any decision for purchase,” the guidelines said.
Singh further said social media influencers should disclose the nature of their endorsements.
The guidelines describe influencers as “individuals or groups who have access to an audience and the power to affect their purchasing decisions about a product, brand or service because of the influencer’s authority, knowledge, position or relationship with their audience”.
Influencers are defined as creators who advertise products with a strong influence on the decisions or opinions of their audience. Virtual influencers, which are defined as fictional computer-generated people with realistic features of humans, are also required to disclose their endorsements, the guidelines said further.
The department noted that disclosure is required “when there is a material connection between an advertiser and celebrity/influencer that may affect the weight or credibility of the representation made by the celebrity/influencer”.
These material connections include monetary or other forms of compensation, free products, contests and sweepstakes entries, trips or hotel stays, media barters, coverage and awards, or any personal, family or employment relationship, the rules note.
The influencers should be able to substantiate the claims made by them. The Consumer Protection Act, 2019 provides the framework for the protection of consumers against unfair trade practices and misleading advertisements.
The product or service must have been actually used or experienced by the endorser, the ministry said, adding that consumers can seek legal action against those who default. (IANS)