Tag: Disinformation

  • Facebook, Twitter to face new EU content rules by August 25


    The world’s largest social media platforms Facebook, Twitter, TikTok and others will have to crack down on illegal and harmful content or else face hefty fines under the European Union’s Digital Services Act from as early as August 25.

    The European Commission today will designate 19 very large online platforms (VLOPs) and search engines that will fall under the scrutiny of the wide-ranging online content law. These firms will face strict requirements including swiftly removing illegal content, ensuring minors are not targeted with personalized ads and limiting the spread of disinformation and harmful content like cyberbullying.

“With great scale comes great responsibility,” said the EU’s Internal Market Commissioner Thierry Breton in a briefing with journalists. “As of August 25, in other words, exactly four months [from] now, online platforms and search engines with more than 45 million active users … will have stronger obligation[s].”

    The designated companies with over 45 million users in the EU include:

    — Eight social media platforms, namely Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat;

— Five online marketplaces, namely Amazon, Booking, AliExpress, Google Shopping and Zalando;

    — Other platforms, including Apple and Google’s app stores, Google Maps and Wikipedia, and search engines Google and Bing.

    These large platforms will have to stop displaying ads to users based on sensitive data like religion and political opinions. AI-generated content like manipulated videos and photos, known as deepfakes, will have to be labeled.

Companies will also have to conduct yearly assessments of the risks their platforms pose on a range of issues like public health, kids’ safety and freedom of expression, and lay out the measures they are taking to tackle those risks. The first assessment will have to be finalized by August 25.

    “These 19 very large online platforms and search engines will have to redesign completely their systems to ensure a high level of privacy, security and safety of minors with age verification and parental control tools,” said Breton.

External firms will audit their plans. The Commission’s enforcement team will access their data and algorithms to check whether they are promoting a range of harmful content — for example, content endangering public health or the integrity of elections.

    Fines can go up to 6 percent of their global annual turnover and very serious cases of infringement could result in platforms facing temporary bans.

    Breton said one of the first tests for large platforms in Europe will be elections in Slovakia in September because of concerns around “hybrid warfare happening on social media, especially in the context of the war in Ukraine.”

“I am particularly concerned by the content moderation system [of] Facebook, which is a platform playing an important role in the opinion building for example for the Slovak society,” said Breton. “Meta needs to carefully investigate its system and fix it, where needed, ASAP.”

    The Commission will also go to Twitter in the U.S. at the end of June to check whether the company is ready to comply with the DSA. “At the invitation of Elon Musk, my team and I will carry out a stress test live at Twitter’s headquarters,” added Breton.

TikTok has also asked the Commission to check whether it will be compliant, but no date has been set yet.

The Commission is also in the process of designating “four to five” additional platforms “in the next few weeks.” Porn platforms like PornHub and YouPorn have said that 33 million and 7 million Europeans, respectively, visit their websites every month — below the 45 million-user threshold, meaning they wouldn’t have to face extra requirements to tackle the risks they could pose to society.

    This article has been updated.



    ( With inputs from : www.politico.eu )

  • Disinformation being spread to stop India’s progress towards becoming ‘vishwaguru’: Bhagwat


    Mumbai: Rashtriya Swayamsevak Sangh chief Mohan Bhagwat on Sunday said misconceptions and distorted information were being spread about India to slow down its progress towards becoming a ‘vishwaguru’.

    Speaking at a function in Mumbai, Bhagwat said such misconceptions were spread about the country post 1857 (after the First War of Independence) but such elements got a befitting reply from Swami Vivekanand.

    These misconceptions were being spread to slow down our progress as “nobody in the world can argue with us on the basis of logic,” he added.


    “We are going to be a vishwaguru in the next 20-30 years. For that, we need to prepare at least two generations who will experience the change,” Bhagwat said.

India had achieved a lot over the years, but distorted information was being spread globally. To counter it, the country needs to prepare its generations and also attract “good people in the world towards us”, Bhagwat said.

    “Post 1857, some misconceptions were spread against us. It was Swami Vivekanand who gave a befitting reply to those who looked down upon us,” said the RSS chief.



    ( With inputs from www.siasat.com )

  • What the hell is wrong with TikTok? 


    Western governments are ticked off with TikTok. The Chinese-owned app loved by teenagers around the world is facing allegations of facilitating espionage, failing to protect personal data, and even of corrupting young minds.

    Governments in the United States, United Kingdom, Canada, New Zealand and across Europe have moved to ban the use of TikTok on officials’ phones in recent months. If hawks get their way, the app could face further restrictions. The White House has demanded that ByteDance, TikTok’s Chinese parent company, sell the app or face an outright ban in the U.S.

    But do the allegations stack up? Security officials have given few details about why they are moving against TikTok. That may be due to sensitivity around matters of national security, or it may simply indicate that there’s not much substance behind the bluster.

    TikTok’s Chief Executive Officer Shou Zi Chew will be questioned in the U.S. Congress on Thursday and can expect politicians from all sides of the spectrum to probe him on TikTok’s dangers. Here are some of the themes they may pick up on: 

    1. Chinese access to TikTok data

    Perhaps the most pressing concern is around the Chinese government’s potential access to troves of data from TikTok’s millions of users. 

Western security officials have warned that ByteDance could be subject to China’s national security legislation, particularly the 2017 National Intelligence Law that requires Chinese companies to “support, assist and cooperate” with national intelligence efforts. This law is a blank check for Chinese spy agencies, they say.

    TikTok’s user data could also be accessed by the company’s hundreds of Chinese engineers and operations staff, any one of whom could be working for the state, Western officials say. In December 2022, some ByteDance employees in China and the U.S. targeted journalists at Western media outlets using the app (and were later fired). 

    EU institutions banned their staff from having TikTok on their work phones last month. An internal email sent to staff of the European Data Protection Supervisor, seen by POLITICO, said the move aimed “to reduce the exposure of the Commission from cyberattacks because this application is collecting so much data on mobile devices that could be used to stage an attack on the Commission.” 

    And the Irish Data Protection Commission, TikTok’s lead privacy regulator in the EU, is set to decide in the next few months if the company unlawfully transferred European users’ data to China. 

    Skeptics of the security argument say that the Chinese government could simply buy troves of user data from little-regulated brokers. American social media companies like Twitter have had their own problems preserving users’ data from the prying eyes of foreign governments, they note. 

    TikTok says it has never given data to the Chinese government and would decline if asked to do so. Strictly speaking, ByteDance is incorporated in the Cayman Islands, which TikTok argues would shield it from legal obligations to assist Chinese agencies. ByteDance is owned 20 percent by its founders and Chinese investors, 60 percent by global investors, and 20 percent by employees. 

    There’s little hope to completely stop European data from going to China | Alex Plavevski/EPA

    The company has unveiled two separate plans to safeguard data. In the U.S., Project Texas is a $1.5 billion plan to build a wall between the U.S. subsidiary and its Chinese owners. The €1.2 billion European version, named Project Clover, would move most of TikTok’s European data onto servers in Europe.

    Nevertheless, TikTok’s chief European lobbyist Theo Bertram also said in March that it would be “practically extremely difficult” to completely stop European data from going to China.

    2. A way in for Chinese spies

    If Chinese agencies can’t access TikTok’s data legally, they can just go in through the back door, Western officials allege. China’s cyber-spies are among the best in the world, and their job will be made easier if datasets or digital infrastructure are housed in their home territory.

    Dutch intelligence agencies have advised government officials to uninstall apps from countries waging an “offensive cyber program” against the Netherlands — including China, but also Russia, Iran and North Korea.

    Critics of the cyber espionage argument refer to a 2021 study by the University of Toronto’s Citizen Lab, which found that the app did not exhibit the “overtly malicious behavior” that would be expected of spyware. Still, the director of the lab said researchers lacked information on what happens to TikTok data held in China.

    TikTok’s Project Texas and Project Clover include steps to assuage fears of cyber espionage, as well as legal data access. The EU plan would give a European security provider (still to be determined) the power to audit cybersecurity policies and data controls, and to restrict access to some employees. Bertram said this provider could speak with European security agencies and regulators “without us [TikTok] being involved, to give confidence that there’s nothing to hide.” 

    Bertram also said the company was looking to hire more engineers outside China. 

    3. Privacy rights

    Critics of TikTok have accused the app of mass data collection, particularly in the U.S., where there are no general federal privacy rights for citizens.

    In jurisdictions that do have strict privacy laws, TikTok faces widespread allegations of failing to comply with them.

    The company is being investigated in Ireland, the U.K. and Canada over its handling of underage users’ data. Watchdogs in the Netherlands, Italy and France have also investigated its privacy practices around personalized advertising and for failing to limit children’s access to its platform. 

    TikTok has denied accusations leveled in some of the reports and argued that U.S. tech companies are collecting the same large amount of data. Meta, Amazon and others have also been given large fines for violating Europeans’ privacy.

    4. Psychological operations

    Perhaps the most serious accusation, and certainly the most legally novel one, is that TikTok is part of an all-encompassing Chinese civilizational struggle against the West. Its role: to spread disinformation and stultifying content in young Western minds, sowing division and apathy.

    Earlier this month, the director of the U.S. National Security Agency warned that Chinese control of TikTok’s algorithm could allow the government to carry out influence operations among Western populations. TikTok says it has around 300 million active users in Europe and the U.S. The app ranked as the most downloaded in 2022.

    A woman watches a video of Egyptian influencer Haneen Hossam | Khaled Desouki/AFP via Getty Images

    Reports emerged in 2019 suggesting that TikTok was censoring pro-LGBTQ content and videos mentioning Tiananmen Square. ByteDance has also been accused of pushing inane time-wasting videos to Western children, in contrast to the wholesome educational content served on its Chinese app Douyin.

    Besides accusations of deliberate “influence operations,” TikTok has also been criticized for failing to protect children from addiction to its app, dangerous viral challenges, and disinformation. The French regulator said last week that the app was still in the “very early stages” of content moderation. TikTok’s Italian headquarters was raided this week by the consumer protection regulator with the help of Italian law enforcement to investigate how the company protects children from viral challenges.

    Researchers at Citizen Lab said that TikTok doesn’t enforce obvious censorship. Other critics of this argument have pointed out that Western-owned platforms have also been manipulated by foreign countries, such as Russia’s campaign on Facebook to influence the 2016 U.S. elections. 

TikTok says it has adapted its content moderation since 2019 and regularly releases a transparency report about what it removes. The company has also touted a “transparency center” that opened in the U.S. in July 2020 and one in Ireland in 2022. It has also said it will comply with new EU content moderation rules, the Digital Services Act, which will require platforms to give regulators and researchers access to their algorithms and data.

    Additional reporting by Laura Kayali in Paris, Sue Allan in Ottawa, Brendan Bordelon in Washington, D.C., and Josh Sisco in San Francisco.



    ( With inputs from : www.politico.eu )

  • Twitter’s plan to charge researchers for data access puts it in EU crosshairs


    Elon Musk pledged Twitter would abide by Europe’s new content rules — but Yevgeniy Golovchenko is not so convinced.

    The Ukrainian academic, an assistant professor at the University of Copenhagen, relies on the social network’s data to track Russian disinformation, including propaganda linked to the ongoing war in Ukraine. But that access, including to reams of tweets analyzing pro-Kremlin messaging, may soon be cut off. Or, even worse for Golovchenko, cost him potentially millions of euros a year.

    Under Musk’s leadership, Twitter is shutting down researchers’ free access to its data, though the final decision on when that will happen has yet to be made. Company officials are also offering new pay-to-play access to researchers via deals that start at $42,000 per month and can rocket up to $210,000 per month for the largest amount of data, according to Twitter’s internal presentation to academics that was shared with POLITICO.

    Yet this switch — from almost unlimited, free data access to costly monthly subscription fees — falls afoul of the European Union’s new online content rules, the Digital Services Act. Those standards, which kick in over the coming months, require the largest social networking platforms, including Twitter, to provide so-called vetted researchers free access to their data.

    It remains unclear how Twitter will meet its obligations under the 27-country bloc’s rules, which impose fines of up to 6 percent of its yearly revenue for infractions.

“If Twitter makes access less accessible to researchers, this will hurt research on things like disinformation and misinformation,” said Golovchenko, who — like many academics who spoke with POLITICO — is now in limbo until Twitter publicly decides when, or whether, it will shut down its current free data-access regime.

It also means that “we will have fewer choices,” added the Ukrainian, acknowledging that, until now, Twitter had been more open to outsiders poking around its data than the likes of Facebook or YouTube. “This means [we] will be even more dependent on the goodwill of social media platforms.”

    Meeting EU commitments

    When POLITICO contacted Twitter for comment, the press email address sent back a poop emoji in response. A company representative did not respond to POLITICO’s questions, though executives met with EU officials and civil society groups Wednesday to discuss how Twitter would comply with Europe’s data-access obligations, according to three people with knowledge of those discussions, who were granted anonymity in order to discuss internal deliberations.

    Twitter was expected to announce details of its new paid-for data access regime last week, according to the same individuals briefed on those discussions, though no specifics about the plans were yet known. As of Friday night, no details had yet been published.

    Still, the ongoing uncertainty comes as EU regulators and policymakers have Musk in their crosshairs as the onetime world’s richest man reshapes Twitter into a free speech-focused social network. The Tesla chief executive has fired almost all of the trust, safety and policy teams in a company-wide cull of employees and has already failed to comply with some of the bloc’s new content rules that require Twitter to detail how it is tackling falsehoods and foreign interference.

    Musk has publicly stated the company will comply with the bloc’s content rules.

    “Access to platforms’ data is one of the key elements of democratic oversight of the players that control increasingly bigger part of Europe’s information space,” Věra Jourová, the European Commission vice president for values and transparency, told POLITICO in an emailed statement in reference to the EU’s code of practice on disinformation, a voluntary agreement that Twitter signed up to last year. A Commission spokesperson said such access would have to be free to approved researchers.

    European Commission Vice President Věra Jourová said “Access to platforms’ data is one of the key elements of democratic oversight” | Olivier Hoslet/EPA-EFE

    “If the access to researchers is getting worse, most likely that would go against the spirit of that commitment (under Europe’s new content rules),” Jourová added. “I appeal to Twitter to find the solution and respect its commitments under the code.”

    Show me the data access

    For researchers based in the United States — who don’t fall under the EU’s new content regime — the future is even bleaker.

Megan Brown, a senior research engineer at New York University’s Center for Social Media and Politics, which relies heavily on Twitter’s existing access, said half of her team’s 40 projects currently use the company’s data. Under Twitter’s proposed price hikes, the researchers would have to abandon their existing paid-for access through the company’s so-called Decahose API for large-scale data, which is expected to be shut off by the end of May.

    NYU’s work via Twitter data has looked at everything from how automated bots skew conversations on social media to potential foreign interference via social media during elections. Such projects, Brown added, will not be possible when Twitter shuts down academic access to those unwilling to pay the new prices.

    “We cannot pay that amount of money,” said Brown. “I don’t know of a research center or university that can or would pay that amount of money.”

    For Rebekah Tromble, chairperson of the working group on platform-to-researcher data access at the European Digital Media Observatory, a Commission-funded group overseeing which researchers can access social media companies’ data under the bloc’s new rules, any rollback of Twitter’s data-access allowances would be against their existing commitments to give researchers greater access to its treasure trove of data.

    “If Twitter makes the choice to begin charging researchers for access, it will clearly be in violation of its commitments under the code of practice [on disinformation],” she said.

    This article has been updated.



    ( With inputs from : www.politico.eu )

  • MEPs cling to TikTok for Gen Z votes


    It may come with security risks but, for European Parliamentarians, TikTok is just too good a political tool to abandon.

    Staff at the European Parliament were ordered to delete the video-sharing application from any work devices by March 20, after an edict last month from the Parliament’s President Roberta Metsola cited cybersecurity risks about the Chinese-owned platform. The chamber also “strongly recommended” that members of the European Parliament and their political advisers give up the app.

    But with European Parliament elections scheduled for late spring 2024, the chamber’s political groups and many of its members are opting to stay on TikTok to win over the hearts and minds of the platform’s user base of young voters. TikTok says around 125 million Europeans actively use the app every month on average.

    “It’s always important in my parliamentary work to communicate beyond those who are already convinced,” said Leïla Chaibi, a French far-left lawmaker who has 3,500 TikTok followers and has previously used the tool to broadcast videos from Strasbourg explaining how the EU Parliament works.

    Malte Gallée, a 29-year-old German Greens lawmaker with over 36,000 followers on TikTok, said, “There are so many young people there but also more and more older people joining there. For me as a politician of course it’s important to be where the people that I represent are, and to know what they’re talking about.”

    Finding Gen Z 

    Parliament took its decision to ban the app from staffers’ phones in late February, in the wake of similar moves by the European Commission, Council of the EU and the bloc’s diplomatic service.

    A letter from the Parliament’s top IT official, obtained by POLITICO, said the institution took the decision after seeing similar bans by the likes of the U.S. federal government and the European Commission and to prevent “possible threats” against the Parliament and its lawmakers.

    For the chamber, it was a remarkable U-turn. Just a few months earlier its top lawmakers in the institution’s Bureau, including President Metsola and 14 vice presidents, approved the launch of an official Parliament account on TikTok, according to a “TikTok strategy” document from the Parliament’s communications directorate-general dated November 18 and seen by POLITICO. 

    “Members and political groups are increasingly opening TikTok accounts,” stated the document, pointing out that teenagers then aged 16 will be eligible to vote in 2024. “The main purpose of opening a TikTok channel for the European Parliament is to connect directly with the young generation and first time voters in the European elections in 2024, especially among Generation Z,” it said.

    Another supposed benefit of launching an official TikTok account would be countering disinformation about the war in Ukraine, the document stated.  

    Most awkwardly, the only sizeable TikTok account claiming to represent the European Parliament is actually a fake one that Parliament has asked TikTok to remove.

    Dummy phones and workarounds

    Among those who stand to lose out from the new TikTok policy are the European Parliament’s political groupings. Some of these groups have sizeable reach on the Chinese-owned app.

    All political groups with a TikTok account said they will use dedicated computers in order to skirt the TikTok ban on work devices | Khaled Desouki/AFP via Getty Images

    The largest group, the center-right European People’s Party, has 51,000 followers on TikTok. Spokesperson Pedro López previously dismissed the Parliament’s move to stop using TikTok as “absurd,” vowing the EPP’s account will stay up and active. López wrote to POLITICO that “we will use dedicated computers … only for TikTok and not connected to any EP or EPP network.”

    That’s the same strategy that all other political groups with a TikTok account — The Left, Socialists and Democrats (S&D) and Liberal Renew groups — said they will use in order to skirt the TikTok ban on work devices like phones, computers or tablets, according to spokespeople. Around 30 Renew Europe lawmakers are active on the platform, according to the group’s spokesperson.

    Beyond the groups, it’s the individual members of parliament — especially those popular on the app — that are pushing back on efforts to restrict its use.

Clare Daly, an Irish independent member who sits with the Left group, is one of the most popular MEPs on the platform, with over 370,000 followers subscribed to watch clips of her plenary speeches. Daly has gained some 80,000 extra followers in just the few weeks since Parliament’s ban was announced.

    Daly in an email railed against Parliament’s new policy: “This decision is not guided by a serious threat assessment. It is security theatre, more about appeasing a climate of geopolitical sinophobia in EU politics than it is about protecting sensitive information or mitigating cybersecurity threats,” she said.

    According to Moritz Körner, an MEP from the centrist Renew Europe group, cybersecurity should be a priority. “Politicians should think about cybersecurity and espionage first and before thinking about their elections to the European Parliament,” he told POLITICO, adding that he doesn’t have a TikTok account.

    Others are finding workarounds to have it both ways.

    “We will use a dummy phone and not our work phones anymore. That [dummy] phone will only be used for producing videos,” said an assistant to German Social-democrat member Delara Burkhardt, who has close to 2,000 followers. The assistant credited the platform with driving a friendlier, less abrasive political debate than other platforms like Twitter: “On TikTok the culture is nicer, we get more questions.”



    ( With inputs from : www.politico.eu )

  • ChatGPT broke the EU plan to regulate AI


    Artificial intelligence’s newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.

    The chatbot dazzled the internet in past months with its rapid-fire production of human-like prose. It declared its love for a New York Times journalist. It wrote a haiku about monkeys breaking free from a laboratory. It even got to the floor of the European Parliament, where two German members gave speeches drafted by ChatGPT to highlight the need to rein in AI technology.

    But after months of internet lolz — and doomsaying from critics — the technology is now confronting European Union regulators with a puzzling question: How do we bring this thing under control?

    The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc’s draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight.

    The catch? ChatGPT can serve both the benign and the malignant.

    This type of AI, called a large language model, has no single intended use: People can prompt it to write songs, novels and poems, but also computer code, policy briefs, fake news reports or, as a Colombian judge has admitted, court rulings. Other models trained on images rather than text can generate everything from cartoons to false pictures of politicians, sparking disinformation fears.

    In one case, the new Bing search engine powered by ChatGPT’s technology threatened a researcher with “hack[ing]” and “ruin.” In another, an AI-powered app to transform pictures into cartoons called Lensa hypersexualized photos of Asian women.

    “These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable,” said Gary Marcus, an AI expert and vocal critic.

    These AIs “are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose,” said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.

    Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.

    The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.

    The idea was met with skepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament’s position, said that the amendment “would make numerous activities high-risk, that are not risky at all.”

    The two lead Parliament lawmakers are working to impose stricter requirements on both developers and users of ChatGPT and similar AI models | Pool photo by Kenzo Tribouillard/EPA-EFE

    In contrast, activists and observers feel that the proposal was just scratching the surface of the general-purpose AI conundrum. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy.

The two lead Parliament lawmakers are also working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the risk of the technology and being transparent about its workings. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.

    Professionals in sectors like education, employment, banking and law enforcement have to be aware “of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei said. 

    If Parliament has trouble wrapping its head around ChatGPT regulation, Brussels is bracing itself for the negotiations that will come after.

    The European Commission, EU Council and Parliament will hash out the details of a final AI Act in three-way negotiations, expected to start in April at the earliest. There, ChatGPT could well cause negotiators to hit a deadlock, as the three parties work out a common solution to the shiny new technology.

    On the sidelines, Big Tech firms — especially those with skin in the game, like Microsoft and Google — are closely watching.

    The EU’s AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton, suggesting that general-purpose AI systems such as ChatGPT are hardly being used for risky activities, and instead are used mostly for drafting documents and helping with writing code.

    “We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton said. (ChatGPT, created by U.S. research group OpenAI, has Microsoft as an investor and is now seen as a core element in its strategy to revive its search engine Bing. OpenAI did not respond to a request for comment.)

    A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.

    Could the bot itself come to EU rulemakers’ rescue, perhaps?

    ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.

    “The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms,” it said.

    The EU, however, has follow-up questions.



    (With inputs from: www.politico.eu)

  • Macron lays out ‘new era’ for France’s reduced presence in Africa


    French President Emmanuel Macron called on Monday for his country to build “a new, balanced relationship” with Africa, as the former colonial power seeks to reduce its military presence on the continent.

    “The objective of this new era is to deploy our security presence in a partnership-based approach,” Macron said in a speech in Paris, ahead of a tour that will take him to Gabon, Angola, the Democratic Republic of Congo and Congo later this week.

    In the future, French military bases on the continent will be “co-administered” with local personnel, the French president said, while there will be a “visible decrease” in the number of French troops stationed in Africa over the next few months.

    The news comes as France has faced increasing opposition from local governments over its continued military presence in several of its former colonies, and has been forced to withdraw hundreds of troops from Mali, the Central African Republic and Burkina Faso over the past year. Around 5,000 French soldiers remain stationed on various bases throughout the continent.

    But Paris’ waning influence — particularly in the Sahel region — has also allowed Russia to expand its reach in Africa, including in the digital sphere through the use of disinformation campaigns, as well as on the ground with mercenaries from the Wagner group, who in some cases have replaced French soldiers.

    The French president said his country would steer away from “anachronistic” power struggles in Africa, saying African countries should be considered as “partners,” both militarily and economically.

    “Africa isn’t [anyone’s] backyard, even less so a continent where Europeans and French should dictate its framework for development,” Macron said.



    (With inputs from: www.politico.eu)

  • French broadcaster BFMTV suspends presenter amid disinformation scandal


    France’s most-watched news channel, the 24-hour BFMTV, has suspended one of its longest-serving presenters and launched an internal investigation into news packages linked to an Israeli disinformation unit calling itself “Team Jorge”.

    Rachid M’Barki, an anchor at BFMTV since its launch in 2005, is on leave and at the centre of the inquiry into multiple stories broadcast on his show, Le journal de la nuit.

    He was suspended last month, after a member of Team Jorge suggested to undercover reporters that the group was secretly behind a BFMTV news report about the Monaco yachting industry.

    The report, broadcast last year, suggested sanctions imposed against Russian oligarchs were damaging the yachting industry in the Mediterranean principality.

    When a reporter approached BFMTV to ask questions about the integrity of that package and several others broadcast by the channel, M’Barki was suspended.

    The channel said in a statement that the packages did not go through the usual editorial validation procedures.

    Team Jorge sells hacking and disinformation services to political and corporate clients who want to conduct covert influence-peddling campaigns. The team was exposed by the Guardian and an international consortium of reporters led by the French nonprofit Forbidden Stories.

    About this investigative series

    The Guardian and Observer have partnered with an international consortium of reporters to investigate global disinformation. Our project, Disinfo black ops, is exposing how false information is deliberately spread by powerful states and private operatives who sell their covert services to political campaigns, companies and wealthy individuals. It also reveals how inconvenient truths can be erased from the internet by those who are rich enough to pay. The investigation is part of Story killers, a collaboration led by Forbidden Stories, a French nonprofit whose mission is to pursue the work of assassinated, threatened or jailed reporters.

    The eight-month investigation was inspired by the work of Gauri Lankesh, a 55-year-old journalist who was shot dead outside her Bengaluru home in 2017. Hours before she was murdered, Lankesh had been putting the finishing touches on an article called In the Age of False News, which examined how so-called lie factories online were spreading disinformation in India. In the final line of the article, which was published after her death, Lankesh wrote: “I want to salute all those who expose fake news. I wish there were more of them.”

    The Story killers consortium includes more than 100 journalists from 30 media outlets including Haaretz, Le Monde, Radio France, Der Spiegel, Paper Trail Media, Die Zeit, TheMarker and the OCCRP. Read more about this project.


    The leader of the unit, Tal Hanan, a former Israeli special forces operative who uses the alias “Jorge”, was filmed boasting about his ability to manipulate the media to spread propaganda, by undercover reporters posing as potential clients.

    In one secretly filmed meeting, Hanan told the reporters he was able to have stories broadcast in France and then played a video clip.

    One of the undercover reporters – Frédéric Métézeau, a Middle East correspondent at Radio France – recognised the clip as a report by M’Barki broadcast on BFMTV and approached the channel about the integrity of the package last month.

    Alarm about the broadcasts escalated rapidly, leading to an internal investigation, and on 11 January M’Barki was taken off air and put on leave.

    It is not clear whether Team Jorge was behind the BFMTV news package and, if so, how they planted the stories and on behalf of whom. The news website Politico, which first reported on the internal investigation at BFMTV, said a dozen suspicious broadcast packages were now under investigation.

    Tal Hanan, the leader of Team Jorge, a hacking and disinformation unit. Photograph: Haaretz/TheMarker/Radio France


    BFMTV confirmed the investigation in a statement on 2 February, saying: “An internal investigation has been ongoing at BFMTV for two weeks after the discovery of content broadcast on our programme, Le journal de la nuit, outside the usual validation channels. The journalist in charge of Journal de la nuit has been suspended since the opening of this investigation.”

    Marc-Olivier Fogiel, the chief executive of BFMTV, told the Forbidden Stories consortium: “At this stage, we remain cautious. But the fact remains that we are victims.”

    In a statement, BFMTV’s society of journalists (SDJ), which seeks to defend the integrity of reporting, said it had “become aware of suspicions of interference concerning a journalist from our channel”. The statement said if the details reported were correct, “they are serious and reprehensible”, and the SDJ added that it hoped the internal investigation would get to the bottom of how the packages came to be broadcast.

    In a comment to Politico, M’Barki denied any intentional misconduct. He said: “They were all real and verified. I do my job … I’m not ruling anything out, maybe I was tricked, I didn’t feel like I was or that I was participating in an operation of I don’t know what or I wouldn’t have done it.”

    Tal Hanan, the head of Team Jorge, did not respond to detailed questions about the unit’s activities and methods but said: “I deny any wrongdoing.”

    (With inputs from: www.theguardian.com)

  • Revealed: the hacking and disinformation team meddling in elections


    A team of Israeli contractors who claim to have manipulated more than 30 elections around the world using hacking, sabotage and automated disinformation on social media has been exposed in a new investigation.

    The unit is run by Tal Hanan, a 50-year-old former Israeli special forces operative who now works privately using the pseudonym “Jorge”, and appears to have been working under the radar in elections in various countries for more than two decades.

    He has now been unmasked by an international consortium of journalists. Hanan and his unit, which uses the codename “Team Jorge”, were exposed through undercover footage and documents leaked to the Guardian.

    Hanan did not respond to detailed questions about Team Jorge’s activities and methods but said: “I deny any wrongdoing.”


    The investigation reveals extraordinary details about how disinformation is being weaponised by Team Jorge, which runs a private service offering to covertly meddle in elections without a trace. The group also works for corporate clients.

    Hanan told the undercover reporters that his services, which others describe as “black ops”, were available to intelligence agencies, political campaigns and private companies that wanted to secretly manipulate public opinion. He said they had been used across Africa, South and Central America, the US and Europe.

    One of Team Jorge’s key services is a sophisticated software package, Advanced Impact Media Solutions, or Aims. It controls a vast army of thousands of fake social media profiles on Twitter, LinkedIn, Facebook, Telegram, Gmail, Instagram and YouTube. Some avatars even have Amazon accounts with credit cards, bitcoin wallets and Airbnb accounts.

    The consortium of journalists that investigated Team Jorge includes reporters from 30 outlets including Le Monde, Der Spiegel and El País. The project, part of a wider investigation into the disinformation industry, has been coordinated by Forbidden Stories, a French nonprofit whose mission is to pursue the work of assassinated, threatened or jailed reporters.


    The undercover footage was filmed by three reporters, who approached Team Jorge posing as prospective clients.

    In more than six hours of secretly recorded meetings, Hanan and his team spoke of how they could gather intelligence on rivals, including by using hacking techniques to access Gmail and Telegram accounts. They boasted of planting material in legitimate news outlets, which are then amplified by the Aims bot-management software.

    Much of their strategy appeared to revolve around disrupting or sabotaging rival campaigns: the team even claimed to have sent a sex toy delivered via Amazon to the home of a politician, with the aim of giving his wife the false impression he was having an affair.

    The methods and techniques described by Team Jorge raise new challenges for big tech platforms, which have for years struggled to prevent nefarious actors spreading falsehoods or breaching the security on their platforms. Evidence of a global private market in disinformation aimed at elections will also ring alarm bells for democracies around the world.

    Tal Hanan and his colleagues met reporters at an office in Modi’in, about 20 miles outside Tel Aviv. Photograph: Haaretz/TheMarker/Radio France


    The Team Jorge revelations could cause embarrassment for Israel, which has come under growing diplomatic pressure in recent years over its export of cyber-weaponry that undermines democracy and human rights.

    Hanan appears to have run at least some of his disinformation operations through an Israeli company, Demoman International, which is registered on a website run by the Israeli Ministry of Defense to promote defence exports. The Israeli MoD did not respond to requests for comment.

    Given their expertise in subterfuge, it is perhaps surprising that Hanan and his colleagues allowed themselves to be exposed by undercover reporters. Journalists using conventional methods have struggled to shed light on the disinformation industry, which is at pains to avoid detection.

    The secretly filmed meetings, which took place between July and December 2022, therefore provide a rare window into the mechanics of disinformation for hire.

    Three journalists – from Radio France, Haaretz and TheMarker – approached Team Jorge pretending to be consultants working on behalf of a politically unstable African country that wanted help delaying an election.

    The encounters with Hanan and his colleagues took place via video calls and an in-person meeting in Team Jorge’s base, an unmarked office in an industrial park in Modi’in, 20 miles outside Tel Aviv.

    Hanan described his team as “graduates of government agencies”, with expertise in finance, social media and campaigns, as well as “psychological warfare”, operating from six offices around the world. Four of Hanan’s colleagues attended the meetings, including his brother, Zohar Hanan, who was described as the chief executive of the group.

    In his initial pitch to the potential clients, Hanan claimed: “We are now involved in one election in Africa … We have a team in Greece and a team in [the] Emirates … You follow the leads. [We have completed] 33 presidential-level campaigns, 27 of which were successful.” Later, he said he was involved in two “major projects” in the US but claimed not to engage directly in US politics.

    It was not possible to verify all of Team Jorge’s claims in the undercover meetings, and Hanan may have been embellishing them in order to secure a lucrative deal with prospective clients. For example, it appears Hanan may have inflated his fees when discussing the cost of his services.

    Hanan told the reporters that Team Jorge would accept payments in a variety of currencies, including cryptocurrencies such as bitcoin, or cash, and that he would charge between €6m and €15m for interference in elections.

    The undercover footage

    What is this undercover footage?

    Disinformation operatives work under the radar. To find out more about ‘Team Jorge’, an Israel-based unit selling hacking and social media manipulation services, three journalists went undercover. They posed as consultants, working on behalf of a client in a politically unstable African country who wanted to delay a forthcoming election. The reporters secretly filmed several meetings with the group’s leader, Tal Hanan, who uses the alias ‘Jorge’, and his associates between July 2022 and December 2022. 

    Who is in the footage?

    The footage captures Hanan, as well as his brother, Zohar Hanan, and other associates of Team Jorge. Faces of reporters have been blurred. The meetings took place on video calls, when Hanan and his colleagues gave slideshow demonstrations of their services, and in person, at Team Jorge’s office in an industrial park 20 miles outside Tel Aviv. 

    Who did the secret filming?

    It was secretly filmed by three reporters from media outlets working in a consortium investigating disinformation: Gur Megiddo (TheMarker), Frédéric Métézeau (Radio France) and Omer Benjakob (Haaretz). The video was then shared with more than 25 other media outlets in the consortium, including the Guardian and Observer. While the Guardian and Observer were not involved in the undercover filming, they are publishing the material because of the strong public interest justifications for doing so.

    What is Team Jorge’s response?

    Tal Hanan did not provide a detailed response to questions from the Guardian. He said: ‘To be clear, I do deny any wrongdoing.’


    However, emails leaked to the Guardian show Hanan quoting more modest fees. One suggests that in 2015 he asked for $160,000 from the now defunct British consultancy Cambridge Analytica for involvement in an eight-week campaign in a Latin American country.

    In 2017 Hanan again pitched to work for Cambridge Analytica, this time in Kenya, but was rejected by the consultancy, which said “$400,000-$600,000 per month, and substantially more for crisis response” was more than its clients would pay.

    There is no evidence that either of those campaigns went ahead. Other leaked documents, however, reveal that when Team Jorge worked covertly on the Nigerian presidential race in 2015 it did so alongside Cambridge Analytica.

    Alexander Nix, who was the chief executive of Cambridge Analytica, declined to comment in detail but added: “Your purported understanding is disputed.”

    Team Jorge also sent Nix’s political consultancy a video showcasing an early iteration of the social media disinformation software it now markets as Aims. Hanan said in an email that the tool, which enabled users to create up to 5,000 bots to deliver “mass messages” and “propaganda”, had been used in 17 elections.

    “It’s our own developed Semi-Auto Avatar creation and network deployment system,” he said, adding that it could be used in any language and was being sold as a service, although the software could be bought “if the price is right”.

    Team Jorge’s bot-management software appears to have grown significantly by 2022, according to what Hanan told the undercover reporters. He said it controlled a multinational army of more than 30,000 avatars, complete with digital backstories that stretch back years.

    Demonstrating the Aims interface, Hanan scrolled through dozens of avatars, and showed how fake profiles could be created in an instant, using tabs to choose nationality and gender and then matching profile pictures to names.

    “This is Spanish, Russian, you see Asians, Muslims. Let’s make a candidate together,” he told the undercover reporters, before settling on one image of a white woman. “Sophia Wilde, I like the name. British. Already she has email, date birth, everything.”

    Hanan was coy when asked where the photos for his avatars came from. However, the Guardian and its partners have discovered several instances in which images have been harvested from the social media accounts of real people. The photo of “Sophia Wilde”, for instance, appears to have been stolen from a Russian social media account belonging to a woman who lives in Leeds.

    The Guardian and its reporting partners tracked Aims-linked bot activity across the internet. It was behind fake social media campaigns, mostly involving commercial disputes, in about 20 countries including the UK, US, Canada, Germany, Switzerland, Mexico, Senegal, India and the United Arab Emirates.

    This week Meta, the owner of Facebook, took down Aims-linked bots on its platform after reporters shared a sample of the fake accounts with the company. On Tuesday, a Meta spokesperson connected the Aims bots to others that were linked in 2019 to another, now-defunct Israeli firm which it banned from the platform.

    “This latest activity is an attempt by some of the same individuals to come back and we removed them for violating our policies,” the spokesperson said. “The group’s latest activity appears to have centred around running fake petitions on the internet or seeding fabricated stories in mainstream media outlets.”

    In addition to Aims, Hanan told reporters about his “blogger machine” – an automated system for creating websites that the Aims-controlled social media profiles could then use to spread fake news stories across the internet. “After you’ve created credibility, what do you do? Then you can manipulate,” he said.

    ‘I will show you how safe Telegram is’

    No less alarming were Hanan’s demonstrations of his team’s hacking capabilities, in which he showed the reporters how he could penetrate Telegram and Gmail accounts. In one case, he brought up on screen the Gmail account of a man described as the “assistant of an important guy” in the general election in Kenya, which was days away.

    “Today if someone has a Gmail, it means they have much more than just email,” Hanan said as he clicked through the target’s emails, draft folders, contacts and drives. He then showed how he claimed to be able to access accounts on Telegram, an encrypted messaging app.

    Tal Hanan. Photograph: Haaretz/TheMarker/Radio France

    One of the Telegram accounts he claimed to penetrate belonged to a person in Indonesia, while the other two appeared to belong to Kenyans involved in the ongoing general election, and close to the then candidate William Ruto, who ended up winning the presidency.

    “I know in some countries they believe Telegram is safe. I will show you how safe it is,” he said, before showing a screen in which he appeared to scroll through the Telegram contacts of one Kenyan strategist who was working for Ruto at the time.

    Hanan then demonstrated how access to Telegram could be manipulated to sow mischief.

    Typing the words “hello how are you dear”, Hanan appeared to send a message from the Kenyan strategist’s account to one of their contacts. “I’m not just watching,” Hanan boasted, before explaining how manipulating the messaging app to send messages could be used to create chaos in a rival’s election campaign.

    “One of the biggest thing is to put sticks between the right people, you understand,” he said. “And I can write him what I think about his wife, or what I think about his last speech, or I can tell him that I promised him to be my next chief of staff, OK?”

    Hanan then showed how – once the message had been read – he could “delete” it to cover his tracks. But when Hanan repeated that trick, hacking into the Telegram account of the second close adviser to Ruto, he made a mistake.

    After sending an innocuous Telegram message consisting only of the number “11” to one of the hacking victim’s contacts, he failed to properly delete it.

    Hanan sent a Telegram message consisting only of the number 11 to one of the hacking victim’s contacts. Photograph: Haaretz/TheMarker/Radio France

    A reporter in the consortium was later able to track down the recipient of that message and was granted permission to check the person’s phone. The “11” message was still visible on their Telegram account, providing evidence that Team Jorge’s infiltration of the account was genuine.

    Hanan suggested to the undercover reporters that some of his hacking methods exploited vulnerabilities in the global signalling telecoms system, SS7, which for decades has been regarded by experts as a weak spot in the telecoms network.

    Google, which runs the Gmail service, declined to comment. Telegram said “the problem of SS7 vulnerabilities” was widely known and “not unique to Telegram”. They added: “Accounts on any massively popular social media network or messaging app can be vulnerable to hacking or impersonation unless users follow security recommendations and take proper precautions to keep their accounts secure.”

    Hanan did not respond to detailed requests for comment, claiming that he needed “approval” from an unspecified authority before doing so. However, he added: “To be clear, I deny any wrongdoing.”

    Zohar Hanan, his brother and business partner, added: “I have been working all my life according to the law!”

    (With inputs from: www.theguardian.com)

  • Elon Musk goes to war with researchers


    When Elon Musk bought Twitter, he promised an era of openness for the social media platform. Yet that transparency will soon come at a price.

    On Thursday, the social-networking giant will shut down free and unfettered access to reams of data on the company’s millions of users. As part of that overhaul, researchers worldwide who track misinformation and hate speech will also have their access shut down — unless they stump up the cash to keep the data tap on.

    The move is part of Musk’s efforts to make Twitter profitable amid declining advertising revenue, sluggish user growth and cut-throat competition from the likes of TikTok and Instagram.

    But the shift has riled academics, infuriated lawmakers and potentially put Twitter at odds with new content-moderation rules in the European Union that require such data access to independent researchers.

    “Shutting down or requiring paid access to the researcher API will be devastating,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who has spent years relying on Twitter’s API to track potentially harmful material online.

    “There are inequities in resources for researchers around the world. Scholars at Ivy League institutions in the United States could probably afford to pay,” she added. “But there are scholars all around the world who simply will not have the resources to pay anything for access to this.”

    The change would cut free access to Twitter’s so-called application programming interface (API), which allowed outsiders to track what happened on the platform on a large scale. The API essentially gave outsiders direct access to the company’s data streams and was kept open to allow researchers to monitor users, including to spot harmful, fake or misleading content.

    A team at New York University, for instance, published a report last month on how wide-reaching Russia’s interference in the 2016 U.S. presidential election had been by directly tapping into Twitter’s API system. Without that access, the level of Kremlin meddling would have been lost to history, according to Joshua Tucker, co-director of NYU’s Center for Social Media and Politics.

    Twitter did not respond to repeated requests to comment on whether this week’s change would affect academics and other independent researchers. The move still may not happen at all, depending on how Twitter tweaks its policies. The company’s development team said via a post on the social network last week it was committed to allowing others to access the platform via some form of API.

    “We’ll be back with more details on what you can expect next week,” they said.

    Yet the lack of details about who will be affected — and how much the data access will cost from February 9 — has left academics and other researchers scrambling for any details. Meanwhile, many of Twitter’s employees working on trust and safety issues have either been fired or have left the company since Musk bought Twitter for $44 billion in late October.

    In Europe’s crosshairs

    The change comes just as the European Commission on Thursday will publish its first reports from social media companies, including Twitter, about how they are complying with the EU’s so-called code of practice on disinformation, a voluntary agreement between EU legislators and Big Tech firms in which these companies agree to uphold a set of principles to clamp down on such material. The code of practice includes pledges to “empower researchers” by improving their ability to access companies’ data to track online content.

    Thierry Breton, Europe’s internal market commissioner, talked to Musk last week to remind him about his obligations regarding the bloc’s content rules, though neither discussed the upcoming shutdown of free data access to the social network.

    “We cannot rely only on the assessment of the platforms themselves. If the access to researchers is getting worse, most likely that would go against the spirit of that commitment,” Věra Jourová, the European Commission’s vice president for values and transparency, told POLITICO.

    “It’s worrying to see a reversal of the trend on Twitter,” she added in reference to the likely cutback in outsiders’ access to the company’s data.

    While the bloc’s disinformation standards are not mandatory, separate content rules from Brussels, known as the Digital Services Act, also directly require social media companies to provide data access to so-called vetted researchers. By complying with the code of practice on disinformation, tech giants can ease some of their compliance obligations under those separate content-moderation rules and avoid fines of up to 6 percent of their revenues if they fall afoul of the standards.

    Yet even Twitter’s inclusion in the voluntary standards on disinformation is on shaky ground.

    The company submitted its initial report, which will be published Thursday, and Musk said he was committed to complying with the rules. But Camino Rojo, who served as Twitter's head of public policy for Spain and had been the main person at the company handling daily work on the code since November's mass layoffs, is no longer working at the tech giant as of last week, according to two people with direct knowledge of the matter, who spoke on condition of anonymity to discuss internal company matters. Rojo did not respond to a request for comment.

    American lawmakers are also trying to pass legislation that would improve researcher access to social media companies following a series of scandals. The companies' role in fostering the January 6 Capitol Hill riot has triggered calls for tougher scrutiny, as did the so-called Facebook Files revelations from whistleblower Frances Haugen, which highlighted how difficult it remains for outsiders to understand what is happening on these platforms.

    “Twitter should be making it easier to study what’s happening on its platform, not harder,” U.S. Representative Lori Trahan, a Massachusetts Democrat, said in a statement in reference to the upcoming change to data access. “This is the latest in a series of bad moves from Twitter under Elon Musk’s leadership.”

    Rebecca Kern contributed reporting from Washington.

    This article has been updated to reflect a change in when the European Commission is expected to publish reports under the code of practice on disinformation.


