Tag: Content moderation

  • What the hell is wrong with TikTok? 


    Western governments are ticked off with TikTok. The Chinese-owned app loved by teenagers around the world is facing allegations of facilitating espionage, failing to protect personal data, and even of corrupting young minds.

    Governments in the United States, United Kingdom, Canada, New Zealand and across Europe have moved to ban the use of TikTok on officials’ phones in recent months. If hawks get their way, the app could face further restrictions. The White House has demanded that ByteDance, TikTok’s Chinese parent company, sell the app or face an outright ban in the U.S.

    But do the allegations stack up? Security officials have given few details about why they are moving against TikTok. That may be due to sensitivity around matters of national security, or it may simply indicate that there’s not much substance behind the bluster.

    TikTok’s Chief Executive Officer Shou Zi Chew will be questioned in the U.S. Congress on Thursday and can expect politicians from all sides of the spectrum to probe him on TikTok’s dangers. Here are some of the themes they may pick up on: 

    1. Chinese access to TikTok data

    Perhaps the most pressing concern is the Chinese government’s potential access to troves of data from TikTok’s millions of users.

    Western security officials have warned that ByteDance could be subject to China’s national security legislation, particularly the 2017 National Intelligence Law, which requires Chinese companies to “support, assist and cooperate” with national intelligence efforts. This law is a blank check for Chinese spy agencies, they say.

    TikTok’s user data could also be accessed by the company’s hundreds of Chinese engineers and operations staff, any one of whom could be working for the state, Western officials say. In December 2022, ByteDance admitted that some of its employees in China and the U.S. had accessed the TikTok data of journalists at Western media outlets; those employees were later fired.

    EU institutions banned their staff from having TikTok on their work phones last month. An internal email sent to staff of the European Data Protection Supervisor, seen by POLITICO, said the move aimed “to reduce the exposure of the Commission from cyberattacks because this application is collecting so much data on mobile devices that could be used to stage an attack on the Commission.” 

    And the Irish Data Protection Commission, TikTok’s lead privacy regulator in the EU, is set to decide in the next few months if the company unlawfully transferred European users’ data to China. 

    Skeptics of the security argument say that the Chinese government could simply buy troves of user data from little-regulated brokers. American social media companies like Twitter have had their own problems protecting users’ data from the prying eyes of foreign governments, they note.

    TikTok says it has never given data to the Chinese government and would decline if asked to do so. Strictly speaking, ByteDance is incorporated in the Cayman Islands, which TikTok argues would shield it from legal obligations to assist Chinese agencies. ByteDance is owned 20 percent by its founders and Chinese investors, 60 percent by global investors, and 20 percent by employees. 

    There’s little hope to completely stop European data from going to China | Alex Plavevski/EPA

    The company has unveiled two separate plans to safeguard data. In the U.S., Project Texas is a $1.5 billion plan to build a wall between the U.S. subsidiary and its Chinese owners. The €1.2 billion European version, named Project Clover, would move most of TikTok’s European data onto servers in Europe.

    Nevertheless, TikTok’s chief European lobbyist Theo Bertram said in March that it would be “practically extremely difficult” to completely stop European data from going to China.

    2. A way in for Chinese spies

    If Chinese agencies can’t access TikTok’s data legally, they can just go in through the back door, Western officials allege. China’s cyber-spies are among the best in the world, and their job will be made easier if datasets or digital infrastructure are housed in their home territory.

    Dutch intelligence agencies have advised government officials to uninstall apps from countries waging an “offensive cyber program” against the Netherlands — including China, but also Russia, Iran and North Korea.

    Critics of the cyber espionage argument refer to a 2021 study by the University of Toronto’s Citizen Lab, which found that the app did not exhibit the “overtly malicious behavior” that would be expected of spyware. Still, the director of the lab said researchers lacked information on what happens to TikTok data held in China.

    TikTok’s Project Texas and Project Clover include steps to assuage fears about cyber espionage as well as about legal data access. The EU plan would give a European security provider (still to be determined) the power to audit cybersecurity policies and data controls, and to restrict some employees’ access to the data. Bertram said this provider could speak with European security agencies and regulators “without us [TikTok] being involved, to give confidence that there’s nothing to hide.”

    Bertram also said the company was looking to hire more engineers outside China. 

    3. Privacy rights

    Critics of TikTok have accused the app of mass data collection, particularly in the U.S., where there are no general federal privacy rights for citizens.

    In jurisdictions that do have strict privacy laws, TikTok faces widespread allegations of failing to comply with them.

    The company is being investigated in Ireland, the U.K. and Canada over its handling of underage users’ data. Watchdogs in the Netherlands, Italy and France have also investigated its privacy practices around personalized advertising and for failing to limit children’s access to its platform. 

    TikTok has denied accusations leveled in some of the reports and argued that U.S. tech companies collect similarly large amounts of data. Meta, Amazon and others have also been given large fines for violating Europeans’ privacy.

    4. Psychological operations

    Perhaps the most serious accusation, and certainly the most legally novel one, is that TikTok is part of an all-encompassing Chinese civilizational struggle against the West. Its role: to spread disinformation and stultifying content in young Western minds, sowing division and apathy.

    Earlier this month, the director of the U.S. National Security Agency warned that Chinese control of TikTok’s algorithm could allow the government to carry out influence operations among Western populations. TikTok says it has around 300 million active users in Europe and the U.S. The app ranked as the most downloaded in 2022.

    A woman watches a video of Egyptian influencer Haneen Hossam | Khaled Desouki/AFP via Getty Images

    Reports emerged in 2019 suggesting that TikTok was censoring pro-LGBTQ content and videos mentioning Tiananmen Square. ByteDance has also been accused of pushing inane time-wasting videos to Western children, in contrast to the wholesome educational content served on its Chinese app Douyin.

    Besides accusations of deliberate “influence operations,” TikTok has also been criticized for failing to protect children from addiction to its app, dangerous viral challenges, and disinformation. The French regulator said last week that the app was still in the “very early stages” of content moderation. This week, Italy’s consumer protection regulator, with the help of law enforcement, raided TikTok’s Italian headquarters to investigate how the company protects children from viral challenges.

    Researchers at Citizen Lab said that TikTok doesn’t enforce obvious censorship. Other critics of this argument have pointed out that Western-owned platforms have also been manipulated by foreign countries, such as Russia’s campaign on Facebook to influence the 2016 U.S. elections. 

    TikTok says it has adapted its content moderation since 2019 and regularly releases transparency reports about what it removes. The company has also touted “transparency centers” opened in the U.S. in July 2020 and in Ireland in 2022. It has also said it will comply with the EU’s new content moderation rules, the Digital Services Act, which will require platforms to give regulators and researchers access to their algorithms and data.

    Additional reporting by Laura Kayali in Paris, Sue Allan in Ottawa, Brendan Bordelon in Washington, D.C., and Josh Sisco in San Francisco.



    (With inputs from: www.politico.eu)

  • France aims to protect kids from parents oversharing pics online


    PARIS — French parents had better think twice before posting too many pictures of their offspring on social media.

    On Tuesday, members of the National Assembly’s law committee unanimously green-lit draft legislation to protect children’s rights to their own images.

    “The message to parents is that their job is to protect their children’s privacy,” Bruno Studer, an MP from President Emmanuel Macron’s party who put the bill forward, said in an interview. “On average, children have 1,300 photos of themselves circulating on social media platforms before the age of 13, before they are even allowed to have an account,” he added.

    The French president and his wife Brigitte have made child protection online a political priority. Lawmakers are also working on age-verification requirements for social media and rules to limit kids’ screen time.

    Studer, who was first elected in 2017, has made a career out of child safety online. In the past few years, he authored two groundbreaking pieces of legislation: one requiring smartphone and tablet manufacturers to give parents the option to control their children’s internet access, and another introducing legal protections for YouTube child stars.

    So-called sharenting (a blend of “sharing” and “parenting,” referring to parents posting sensitive pictures of their kids online) constitutes one of the main risks to children’s privacy, according to the bill’s explanatory statement. Half of the pictures shared by child sexual abusers were initially posted by parents on social media, according to reports by the National Center for Missing and Exploited Children cited in the text.

    The legislation adopted on Tuesday adds protecting children’s privacy to parents’ legal duties. Both parents would be jointly responsible for their offspring’s image rights and “shall involve the child … according to his or her age and degree of maturity.”

    In case of disagreement between parents, a judge can ban one of them from posting or sharing a child’s pictures without authorization from the other. And in the most extreme cases, parents can lose their parental authority over their kids’ image rights “if the dissemination of the child’s image by both parents seriously affects the child’s dignity or moral integrity.”

    The bill still needs to go through a plenary session next week and then the Senate before it can become law.



    (With inputs from: www.politico.eu)

  • Elon Musk goes to war with researchers


    When Elon Musk bought Twitter, he promised an era of openness for the social media platform. Yet that transparency will soon come at a price.

    On Thursday, the social-networking giant will shut down free and unfettered access to reams of data on the company’s millions of users. As part of that overhaul, researchers worldwide who track misinformation and hate speech will also have their access shut down — unless they stump up the cash to keep the data tap on.

    The move is part of Musk’s efforts to make Twitter profitable amid declining advertising revenue, sluggish user growth and cut-throat competition from the likes of TikTok and Instagram.

    But the shift has riled academics, infuriated lawmakers and potentially put Twitter at odds with new content-moderation rules in the European Union that require platforms to give independent researchers such data access.

    “Shutting down or requiring paid access to the researcher API will be devastating,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who has spent years relying on Twitter’s API to track potentially harmful material online.

    “There are inequities in resources for researchers around the world. Scholars at Ivy League institutions in the United States could probably afford to pay,” she added. “But there are scholars all around the world who simply will not have the resources to pay anything for access to this.”

    The change would cut off free access to Twitter’s so-called application programming interface (API), which allowed outsiders to track what happened on the platform on a large scale. The API essentially gave outsiders direct access to the company’s data streams and was kept open to allow researchers to monitor users, including to spot harmful, fake or misleading content.
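    For readers unfamiliar with what that access looked like in practice, the sketch below shows the general pattern such research relied on: authenticate with a bearer token, query an API endpoint for matching posts, and analyze the results at scale. It is a minimal, illustrative Python example written against Twitter’s v2 recent-search endpoint; the environment variable name, the example query and the chosen fields are assumptions for illustration, not a description of any particular research project’s setup.

    import os
    import requests

    # Twitter API v2 recent-search endpoint. A bearer token is assumed to be
    # available in the TWITTER_BEARER_TOKEN environment variable (illustrative name).
    SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

    def fetch_recent_tweets(query, max_results=100):
        """Return one page of recent tweets matching `query`."""
        headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}
        params = {
            "query": query,                # e.g. a keyword plus filter operators
            "max_results": max_results,    # the endpoint accepts 10-100 per request
            "tweet.fields": "created_at,lang,public_metrics",
        }
        response = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("data", [])

    # Example: pull a small sample of non-retweet posts mentioning a topic.
    if __name__ == "__main__":
        for tweet in fetch_recent_tweets('"election fraud" -is:retweet', max_results=10):
            print(tweet["created_at"], tweet["id"], tweet["text"][:80])

    Under the change described in this article, requests like this would sit behind a paid tier rather than the free researcher access that made large-scale monitoring possible.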

    A team at New York University, for instance, published a report last month on how far-reaching Russia’s interference in the 2016 U.S. presidential election had been by directly tapping into Twitter’s API system. Without that access, the level of Kremlin meddling would have been lost to history, according to Joshua Tucker, co-director of New York University’s Center for Social Media and Politics.

    Twitter did not respond to repeated requests to comment on whether this week’s change would affect academics and other independent researchers. The move still may not happen at all, depending on how Twitter tweaks its policies. The company’s development team said via a post on the social network last week it was committed to allowing others to access the platform via some form of API.

    “We’ll be back with more details on what you can expect next week,” they said.

    Yet the lack of details about who will be affected — and how much the data access will cost from February 9 — has left academics and other researchers scrambling for any details. Meanwhile, many of Twitter’s employees working on trust and safety issues have either been fired or have left the company since Musk bought Twitter for $44 billion in late October.

    In Europe’s crosshairs

    The change comes as the European Commission on Thursday will publish its first reports from social media companies, including Twitter, about how they are complying with the EU’s so-called code of practice on disinformation, a voluntary agreement between EU legislators and Big Tech firms in which these companies agree to uphold a set of principles to clamp down on such material. The code of practice includes pledges to “empower researchers” by improving their ability to access companies’ data to track online content.

    Thierry Breton, Europe’s internal market commissioner, talked to Musk last week to remind him about his obligations regarding the bloc’s content rules, though neither discussed the upcoming shutdown of free data access to the social network.

    “We cannot rely only on the assessment of the platforms themselves. If the access to researchers is getting worse, most likely that would go against the spirit of that commitment,” Věra Jourová, the European Commission’s vice president for values and transparency, told POLITICO.

    “It’s worrying to see a reversal of the trend on Twitter,” she added in reference to the likely cutback in outsiders’ access to the company’s data.

    While the bloc’s disinformation standards are not mandatory, separate content rules from Brussels, known as the Digital Services Act, also directly require social media companies to provide data access to so-called vetted researchers. By complying with the code of practice on disinformation, tech giants can ease some of their compliance obligations under those separate content-moderation rules and avoid fines of up to 6 percent of their revenues if they fall afoul of the standards.

    Yet even Twitter’s inclusion in the voluntary standards on disinformation is on shaky ground.

    The company submitted its initial report, which will be published Thursday, and Musk said he was committed to complying with the rules. But Camino Rojo — who served as head of public policy for Spain and had been the main person at Twitter handling the daily work on the code since November’s mass layoffs — is no longer working at the tech giant as of last week, according to two people with direct knowledge of the matter, who spoke on condition of anonymity to discuss internal company matters. Rojo did not respond to a request for comment.

    American lawmakers are also trying to pass legislation that would improve researcher access to social media companies following a series of scandals. The companies’ role in fostering the January 6 Capitol Hill riots has triggered calls for tougher scrutiny, as did the so-called Facebook Files revelations from whistleblower Frances Haugen, which highlighted how difficult it remains for outsiders to understand what is happening on these platforms.

    “Twitter should be making it easier to study what’s happening on its platform, not harder,” U.S. Representative Lori Trahan, a Massachusetts Democrat, said in a statement in reference to the upcoming change to data access. “This is the latest in a series of bad moves from Twitter under Elon Musk’s leadership.”

    Rebecca Kern contributed reporting from Washington.

    This article has been updated to reflect a change in when the European Commission is expected to publish reports under the code of practice on disinformation.



    (With inputs from: www.politico.eu)