Tag: Paul Scully

  • Big Tech lobbyists get stuck in to UK’s landmark competition bill


    LONDON — As the U.K. prepares to overhaul its competition regime, a fierce lobbying battle has broken out between the world’s largest tech companies and their challengers.

    Ministers are gearing up to publish new competition legislation in late April, giving regulators more power to stop a handful of companies dominating digital markets.

    But concern over the U.S. tech giants’ influence in Westminster has prompted ministers close to the bill to warn that the new legislation could be watered down.

    Two ministers have expressed concerns that Big Tech firms are seeking to weaken the process for appealing decisions made by the country’s beefed-up competition regulator, according to multiple people who were either present at those discussions or whose organizations were represented there. They requested anonymity to discuss private meetings.

    One MP said a minister had also approached them to raise concerns, while at an industry roundtable two ministers voiced worries about Big Tech firms trying to influence the appeal mechanism.

    An industry representative said: “There has been a sh*t load of lobbying from Big Tech, but I don’t know if they’ll succeed.” 

    Appealing to whom?

    The Digital Markets, Competition and Consumer Bill will give new powers to a branch of the Competition and Markets Authority called the Digital Markets Unit (DMU). Under the plan, the DMU will be able to fine a company 10 percent of its annual turnover for breaching a code of conduct.

    The code, which has not yet been published, would be designed to ensure that a company with ‘strategic market status’ cannot “unfairly use its market power and strategic position to distort or undermine competition between users of the … firm’s services,” the government has said.

    Jonathan Jones, senior consultant in public law at Linklaters and formerly the head of the U.K. government’s legal department, wrote that the plan would have “very significant consequences” for Big Tech firms and could force them to “significantly alter” their business models.

    One of Big Tech’s concerns is that the bill will allow companies to appeal DMU decisions only on whether the right process was followed, known as the judicial review standard, rather than on the content or merits of the decision. That puts the DMU in line with other regulators and should mean the process is faster, but it also makes decisions harder to appeal.

    Big Tech firms want to be able to appeal on the merits, arguing it is unfair that they cannot challenge whether a DMU decision was correct. They also argue a merits-based appeal would not necessarily be slower than the judicial review standard.

    One of the biggest fears from medium-sized firms is that the biggest tech companies will use strategies to lengthen the appeals process or even get the entire bill delayed | iStock

    Tech Minister Paul Scully, who has responsibility for the bill, told POLITICO: “We want to make sure that the legislation is flexible, proportionate and fair to both big and challenger companies. Any remediation needs to be in place quickly as digital markets move quickly.” 

    One representative of a mid-sized tech firm said: “This is the fundamental point of contention and it will influence whether the bill works for SMEs and challengers against Big Tech. 

    “The fear is that big companies with big lawyers understand how to eke things out (during the appeals process) so that they’ll keep their market advantage for years. We’ve heard ministers express these concerns too.”

    Consumer group Which? is also urging the government to stay with its proposed appeal system. “For the DMU to work effectively, the government must stick to its guns and ensure that the decisions it reaches are not tied up in an elongated appeals process,” said Rocio Concha, its director of policy.

    ‘Investigator and executioner’

    But Jones argued that the bill will make the DMU too powerful.

    “The DMU will have power to decide who it is going to regulate, set the rules that apply to them, and then enforce those rules,” he wrote. “This makes the DMU effectively legislator, investigator and executioner.”

    On the appeal method, Jones argued that it is an “oversimplification” to think that the government’s proposed standard of appeal would be quicker than one based on merits.

    Ben Greenstone, managing director of tech policy consultancy Taso Advisory, said: “I can understand the argument from both sides. The largest tech companies are incentivized to push back against this, but my guess is the government will keep the appeals process as it is, because it keeps it in line with the wider competition regime.”

    However, he added, the bill would work better if some sort of compromise could be found with the biggest tech companies.

    The international playbook

    One of the biggest fears from medium-sized firms is that the biggest tech companies will use strategies already tried and tested abroad to lengthen the appeals process or even get the entire bill delayed.

    In the U.S., the Open App Markets Act has failed to pass amid heavy spending on lobbying.

    Rick VanMeter, executive director of the Coalition for App Fairness, which is based in the U.S. but has U.K. members, said: “In the U.S. we’ve learned that these mobile app gatekeepers will stop at nothing to preserve the status quo and squash their competition.

    “To be successful, policymakers around the world must see through these gatekeepers’ efforts for what they are: self-serving attempts to retain their market power.”

    Google and Microsoft declined to comment. Apple did not respond.



    (With inputs from: www.politico.eu)

  • UK goes light-touch on AI as Elon Musk sounds the alarm


    LONDON — As Elon Musk urged humanity to get a grip on artificial intelligence, ministers in London were hailing its benefits.

    Rishi Sunak’s new technology chief Michelle Donelan on Wednesday unveiled the government’s long-awaited blueprint for regulating AI, insisting a heavy-handed approach is off the agenda.

    At the heart of the innovation-friendly pitch is a plan to give existing regulators a year to issue “practical guidance” for the safe use of machine learning in their sectors based on broad principles like safety, transparency, fairness and accountability. But no new legislation or regulatory bodies are being planned for the burgeoning technology.

    It stands in contrast to the strategy being pursued in Brussels, where lawmakers are pushing through a more detailed rulebook, backed by a new liability regime.

    Donelan insists her “common-sense, outcomes-oriented approach” will allow the U.K. to “be the best place in the world to build, test and use AI technology.”

    Her department’s Twitter account was flooded with content promoting the benefits of AI. “Think AI is scary? It doesn’t have to be!” one of its posts stated on Wednesday.  

    But some experts fear U.K. policymakers, like their counterparts around the world, may not have grasped the scale of the challenge, and believe more urgency is needed in understanding and policing how the fast-developing tech is used.

    “The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software,” Michael Birtwistle, associate director of data and AI law and policy at the Ada Lovelace Institute, said. It has “significant gaps,” which could leave harms “unaddressed,” he warned.

    “We shouldn’t be risking inventing a nuclear blast before we’ve learnt how to keep it in the shell,” Connor Axiotes, a researcher at the free-market Adam Smith Institute think tank, warned.

    Elon wades in

    Hours before the U.K. white paper went live, an open letter was published across the Atlantic calling on AI labs to immediately pause, for at least six months, the training of ever more powerful AI systems. It was signed by artificial intelligence experts and industry executives, including Tesla and Twitter boss Elon Musk, researchers at Alphabet-owned DeepMind, and renowned Canadian computer scientist Yoshua Bengio.

    The letter called for AI developers to work with policymakers to “dramatically accelerate development of robust AI governance systems,” which should “at a minimum include: new and capable regulatory authorities dedicated to AI.” 

    AI labs are locked in “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter warned.

    Rishi Sunak’s new technology chief Michelle Donelan unveiled the government’s blueprint for regulating AI, insisting a heavy-handed approach is off the agenda | Leon Neal/Getty Images

    Back in the U.K., Ellen Judson, head of the Centre for the Analysis of Social Media at the think tank Demos, warned that the U.K. approach of “setting out principles alone” was “not enough.”

    “Without the teeth of legal obligations, this is an approach which will result in a patchwork of regulatory guidance that will do little to fundamentally shift the incentives that lead to risky and unethical uses of AI,” she said.

    But Technology Minister Paul Scully told the BBC he was “not sure” about pausing further AI developments. He said the government’s proposals should “dispel any of those concerns from Elon Musk and those other figures.”

    “What we’re trying to do is to have a situation where we can think as government and think as a sector through the risks but also the benefits of AI — and make sure we can have a framework around this to protect us from the harms,” he said.

    Long time coming

    Industry concerns about the U.K.’s ability to make AI policy are countered by some of those who have worked closely with the British government on it.

    Its approach to policymaking has been “very consultative,” according to Sue Daley, a director at the industry body TechUK, who has been closely following AI developments for a number of years.

    In 2018 ministers set up the Centre for Data Ethics and Innovation and the Office for AI, which worked across the government’s digital and business departments until moving to the newly created Department for Science, Innovation and Technology earlier this year.

    The Office for AI is staffed by a “good team of people,” Daley said, while also pointing to the work the U.K.’s well-regarded regulators, like the Information Commissioner’s Office, had been doing on artificial intelligence “for some time.”

    Greg Clark, the Conservative chairman of parliament’s science and technology committee, said he thought the government was right to “think carefully.” The former business secretary stressed that this was his own view rather than the committee’s.

    “There’s a danger in rushing to adopt extensive regulations precipitously that have not been properly thought through and stress-tested, and that could prove to be an encumbrance to us and could impede the positive applications of AI,” he added. But he said the government should “proceed quickly” from white paper to regulatory framework “during the months ahead.”

    Public view

    Outside Westminster, the potential implications of the technology have yet to sink in, surveys suggest.

    Public First, a Westminster-based consultancy, which conducted a raft of polling into public attitudes to artificial intelligence earlier this month, found that beyond fears about unemployment, people were pretty positive about AI.

    “It certainly pales into insignificance compared to the other things that they are worried about, like the prospect of armed conflict, or even the impact of climate change,” said James Frayne, a founding partner of Public First, who conducted the polling. “This falls way down the priority list.”

    But he cautioned this could change. 

    “One assumes that at some point there will be an event which shocks them, and shakes them, and makes them think very differently about AI,” he added. 

    “At that point there will be great demands for the government to make sure that they’re all over this in terms of regulation. They will expect the government to not only move very quickly, but to have made significant progress already,” he said.



    (With inputs from: www.politico.eu)