Tag: Thierry Breton

  • Facebook, Twitter to face new EU content rules by August 25


    The world’s largest social media platforms, including Facebook, Twitter and TikTok, will have to crack down on illegal and harmful content or face hefty fines under the European Union’s Digital Services Act from as early as August 25.

    The European Commission today will designate 19 very large online platforms (VLOPs) and search engines that will fall under the scrutiny of the wide-ranging online content law. These firms will face strict requirements including swiftly removing illegal content, ensuring minors are not targeted with personalized ads and limiting the spread of disinformation and harmful content like cyberbullying.

    “With great scale comes great responsibility,” said the EU’s Internal Market Commissioner Thierry Breton in a briefing with journalists. “As of August 25, in other words, exactly four months [from] now, online platforms and search engines with more than 45 million active users … will have stronger obligation[s].”

    The designated companies with over 45 million users in the EU include:

    — Eight social media platforms, namely Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat;

    — Five online marketplaces, namely Amazon, Booking.com, AliExpress, Google Shopping and Zalando;

    — Other platforms, including Apple and Google’s app stores, Google Maps and Wikipedia, and search engines Google and Bing.

    These large platforms will have to stop displaying ads to users based on sensitive data like religion and political opinions. AI-generated content like manipulated videos and photos, known as deepfakes, will have to be labeled.

    Companies will also have to conduct yearly assessments of the risks their platforms pose on a range of issues like public health, kids’ safety and freedom of expression. They will be required to lay out the measures they are taking to tackle such risks. The first assessment will have to be finalized by August 25.

    “These 19 very large online platforms and search engines will have to redesign completely their systems to ensure a high level of privacy, security and safety of minors with age verification and parental control tools,” said Breton.

    External firms will audit their plans. The Commission’s enforcement team will have access to their data and algorithms to check whether the platforms are promoting harmful content, for example content endangering public health or the integrity of elections.

    Fines can reach up to 6 percent of a company’s global annual turnover, and very serious cases of infringement could result in platforms facing temporary bans.
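
    As a back-of-the-envelope illustration of that 6 percent ceiling, here is a toy calculation; the turnover figure is hypothetical, not any real company’s:

    ```python
    # Toy illustration of the DSA fine ceiling: up to 6 percent of global
    # annual turnover. The turnover figure below is hypothetical.
    def max_dsa_fine(global_annual_turnover_eur: float) -> float:
        """Upper bound on a DSA fine: 6 percent of global annual turnover."""
        return 0.06 * global_annual_turnover_eur

    # A hypothetical platform with EUR 100 billion in global annual turnover
    print(f"Maximum fine: EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000
    ```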

    Breton said one of the first tests for large platforms in Europe will be elections in Slovakia in September because of concerns around “hybrid warfare happening on social media, especially in the context of the war in Ukraine.”

    “I am particularly concerned by the content moderation system of Facebook, which is a platform playing an important role in the opinion building, for example, for the Slovak society,” said Breton. “Meta needs to carefully investigate its system and fix it, where needed, ASAP.”

    The Commission will also go to Twitter in the U.S. at the end of June to check whether the company is ready to comply with the DSA. “At the invitation of Elon Musk, my team and I will carry out a stress test live at Twitter’s headquarters,” added Breton.

    TikTok has also asked the Commission to check whether it will be compliant, but no date has been set yet.

    The Commission is also in the process of designating “four to five” additional platforms “in the next few weeks.” Porn platforms like PornHub and YouPorn have said that 33 million and 7 million Europeans, respectively, visit their websites every month, below the 45 million threshold, meaning they wouldn’t have to face the extra requirements to tackle the risks they could pose to society.

    This article has been updated.



    (With inputs from: www.politico.eu)

  • ChatGPT broke the EU plan to regulate AI


    Artificial intelligence’s newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.

    The chatbot dazzled the internet in past months with its rapid-fire production of human-like prose. It declared its love for a New York Times journalist. It wrote a haiku about monkeys breaking free from a laboratory. It even got to the floor of the European Parliament, where two German members gave speeches drafted by ChatGPT to highlight the need to rein in AI technology.

    But after months of internet lolz — and doomsaying from critics — the technology is now confronting European Union regulators with a puzzling question: How do we bring this thing under control?

    The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc’s draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight.

    The catch? ChatGPT can serve both the benign and the malignant.

    This type of AI, called a large language model, has no single intended use: People can prompt it to write songs, novels and poems, but also computer code, policy briefs, fake news reports or, as a Colombian judge has admitted, court rulings. Other models trained on images rather than text can generate everything from cartoons to false pictures of politicians, sparking disinformation fears.

    In one case, the new Bing search engine powered by ChatGPT’s technology threatened a researcher with “hack[ing]” and “ruin.” In another, an AI-powered app to transform pictures into cartoons called Lensa hypersexualized photos of Asian women.

    “These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable,” said Gary Marcus, an AI expert and vocal critic.

    These AIs “are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose,” said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.

    Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.

    The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.

    The idea was met with skepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament’s position, said that the amendment “would make numerous activities high-risk, that are not risky at all.”


    In contrast, activists and observers feel that the proposal was just scratching the surface of the general-purpose AI conundrum. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy.

    The two lead Parliament lawmakers are also working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the risk of the technology and being transparent about its workings. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.

    Professionals in sectors like education, employment, banking and law enforcement have to be aware “of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei said. 

    If Parliament has trouble wrapping its head around ChatGPT regulation, Brussels is bracing itself for the negotiations that will come after.

    The European Commission, EU Council and Parliament will hash out the details of a final AI Act in three-way negotiations, expected to start in April at the earliest. There, ChatGPT could well cause negotiators to hit a deadlock, as the three parties work out a common solution to the shiny new technology.

    On the sidelines, Big Tech firms — especially those with skin in the game, like Microsoft and Google — are closely watching.

    The EU’s AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton, suggesting that general-purpose AI systems such as ChatGPT are hardly being used for risky activities, and instead are used mostly for drafting documents and helping with writing code.

    “We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton said. (ChatGPT, created by U.S. research group OpenAI, has Microsoft as an investor and is now seen as a core element in its strategy to revive its search engine Bing. OpenAI did not respond to a request for comment.)

    A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.

    Could the bot itself come to EU rulemakers’ rescue, perhaps?

    ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.

    “The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms,” it said.

    The EU, however, has follow-up questions.
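
    For readers curious how such an exchange is wired up, here is a minimal sketch of posing the same question through OpenAI’s chat completions API; the model name and prompt wording are illustrative assumptions, not POLITICO’s actual setup, and the reply will vary from run to run.

    ```python
    # Hypothetical illustration of querying a ChatGPT-class model via
    # OpenAI's chat completions API. Model choice and prompt are
    # assumptions for the sketch; requires OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative ChatGPT-class model
        messages=[
            {
                "role": "user",
                "content": "Should generative AI and large language models "
                           "fall under the scope of the EU's AI Act?",
            }
        ],
    )
    print(completion.choices[0].message.content)
    ```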



    (With inputs from: www.politico.eu)

  • Elon Musk goes to war with researchers


    When Elon Musk bought Twitter, he promised an era of openness for the social media platform. Yet that transparency will soon come at a price.

    On Thursday, the social-networking giant will shut down free and unfettered access to reams of data on the company’s millions of users. As part of that overhaul, researchers worldwide who track misinformation and hate speech will also have their access shut down — unless they stump up the cash to keep the data tap on.

    The move is part of Musk’s efforts to make Twitter profitable amid declining advertising revenue, sluggish user growth and cut-throat competition from the likes of TikTok and Instagram.

    But the shift has riled academics, infuriated lawmakers and potentially put Twitter at odds with new content-moderation rules in the European Union that require platforms to provide such data access to independent researchers.

    “Shutting down or requiring paid access to the researcher API will be devastating,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who has spent years relying on Twitter’s API to track potentially harmful material online.

    “There are inequities in resources for researchers around the world. Scholars at Ivy League institutions in the United States could probably afford to pay,” she added. “But there are scholars all around the world who simply will not have the resources to pay anything for access to this.”

    The change would cut free access to Twitter’s so-called application programming interface (API), which allowed outsiders to track what happened on the platform on a large scale. The API essentially gave outsiders direct access to the company’s data streams and was kept open to allow researchers to monitor users, including to spot harmful, fake or misleading content.
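
    As an illustration of the kind of research access at stake, here is a minimal sketch of querying Twitter’s v2 recent-search endpoint for posts matching a topic; the bearer token and query are placeholders, and under the new pricing this kind of access may no longer be free.

    ```python
    # Minimal sketch of programmatic research access via Twitter's v2
    # recent-search endpoint. The bearer token is a placeholder; real
    # access requires a developer account.
    import requests

    BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
    SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

    def search_recent_tweets(query: str, max_results: int = 10) -> list:
        """Return recent tweets matching `query` via the v2 search API."""
        response = requests.get(
            SEARCH_URL,
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            params={"query": query, "max_results": max_results},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("data", [])

    if __name__ == "__main__":
        # e.g. sample public posts mentioning a disinformation keyword
        for tweet in search_recent_tweets("disinformation lang:en"):
            print(tweet["id"], tweet["text"][:80])
    ```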

    A team at New York University, for instance, published a report last month on how wide-reaching Russia’s interference in the 2016 U.S. presidential election had been by directly tapping into Twitter’s API system. Without that access, the level of Kremlin meddling would have been lost to history, according to Joshua Tucker, co-director at New York University’s Center for Social Media and Politics.

    Twitter did not respond to repeated requests to comment on whether this week’s change would affect academics and other independent researchers. The move still may not happen at all, depending on how Twitter tweaks its policies. The company’s development team said via a post on the social network last week it was committed to allowing others to access the platform via some form of API.

    “We’ll be back with more details on what you can expect next week,” they said.

    Yet the lack of detail about who will be affected — and how much the data access will cost from February 9 — has left academics and other researchers scrambling for answers. Meanwhile, many of Twitter’s employees working on trust and safety issues have either been fired or have left the company since Musk bought Twitter for $44 billion in late October.

    In Europe’s crosshairs

    The change comes as the European Commission on Thursday will publish its first reports from social media companies, including Twitter, about how they are complying with the EU’s so-called code of practice on disinformation, a voluntary agreement between EU legislators and Big Tech firms in which the companies agree to uphold a set of principles to clamp down on such material. The code of practice includes pledges to “empower researchers” by improving their ability to access companies’ data to track online content.

    Thierry Breton, Europe’s internal market commissioner, talked to Musk last week to remind him about his obligations regarding the bloc’s content rules, though the pair did not discuss the upcoming shutdown of free data access to the social network.

    “We cannot rely only on the assessment of the platforms themselves. If the access to researchers is getting worse, most likely that would go against the spirit of that commitment,” Věra Jourová, the European Commission’s vice president for values and transparency, told POLITICO.

    “It’s worrying to see a reversal of the trend on Twitter,” she added in reference to the likely cutback in outsiders’ access to the company’s data.

    While the bloc’s disinformation standards are not mandatory, separate content rules from Brussels, known as the Digital Services Act, also directly require social media companies to provide data access to so-called vetted researchers. By complying with the code of practice on disinformation, tech giants can ease some of their compliance obligations under those separate content-moderation rules and avoid fines of up to 6 percent of their revenues if they fall afoul of the standards.

    Yet even Twitter’s inclusion in the voluntary standards on disinformation is on shaky ground.

    The company submitted its initial report that will be published Wednesday, and Musk said he was committed to complying with the rules. But Camino Rojo — who served as head of public policy for Spain and, since November’s mass layoffs, was the main person at Twitter involved in the daily work on the code — is no longer working at the tech giant as of last week, according to two people with direct knowledge of the matter, who spoke on the condition of anonymity to discuss internal company matters. Rojo did not respond to a request for comment.

    American lawmakers are also trying to pass legislation that would improve researcher access to social media companies following a series of scandals. The companies’ role in fostering the January 6 Capitol Hill riots has triggered calls for tougher scrutiny, as did the so-called Facebook Files revelations from whistleblower Frances Haugen, which highlighted how difficult it remains for outsiders to understand what is happening on these platforms.

    “Twitter should be making it easier to study what’s happening on its platform, not harder,” U.S. Representative Lori Trahan, a Massachusetts Democrat, said in a statement in reference to the upcoming change to data access. “This is the latest in a series of bad moves from Twitter under Elon Musk’s leadership.”

    Rebecca Kern contributed reporting from Washington.

    This article has been updated to reflect a change in when the European Commission is expected to publish reports under the code of practice on disinformation.



    (With inputs from: www.politico.eu)