New Delhi: The government is not considering bringing a law to regulate the growth of artificial intelligence (AI) in the country, even as generative AI-based chatbots become a rage across the industry.
In a reply to a question in Lok Sabha, the Ministry of Electronics and Information Technology (MeitY) said it sees AI as a significant and strategic area for the country and technology sector.
“AI will have a kinetic effect for the growth of entrepreneurship and business and the government is taking all necessary steps in policies and infrastructure to develop a robust AI sector in the country,” said the Ministry.
The government published the National Strategy for Artificial Intelligence in June 2018, which proposes to develop an ecosystem for the research and adoption of AI.
MeitY said it has established Centres of Excellence in various emerging technologies including AI to explore opportunities in these specialised fields.
“These centres provide start-ups with premium plug-and-play co-working spaces and access to the ecosystem,” it added in its reply.
India is also a founding member of the Global Partnership on Artificial Intelligence (GPAI).
In an earlier interview with IANS, Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, had said that the government aims to make India a global powerhouse of AI, one that does not stop at integrating foreign chatbots but builds next-generation AI-based innovations to empower billions of citizens.
“AI will certainly transform the digital economy and grow the business economy in the country. AI is a ‘kinetic enabler’ of the digital economy and we want to be the global leader in AI,” the Minister had told IANS.
NITI Aayog has also published a series of papers on the subject of ‘Responsible AI for All’.
More than 1,900 AI-focused startups are providing innovative solutions in the country, primarily in the areas of conversational AI, natural language processing (NLP), video analytics, disease detection, fraud prevention and deepfake detection.
New Delhi: Rajya Sabha’s standing committee on commerce, in a recent report on regulation of e-commerce, has recommended a “Digital Market Division” within the Competition Commission of India (CCI) be created as an expert division, specifically tasked with regulation of digital markets.
In its report, which was recently presented in Parliament, the panel has also urged the government to formulate a national cybercrime policy, which holds significance amid increasing reliance on digital technology.
It has asked the government to formulate a comprehensive national cybercrime policy or a legislation, in consultation with stakeholders and industry experts.
The committee, which is headed by Congress MP Abhishek Manu Singhvi, has said in its report that “the presence of an overarching regulatory body, that glues together different ministries and departments and authorities that presently regulate e-commerce, will strengthen the regulatory regime and bridge the existing gaps in enforcement”.
The committee recommended that a Digital Market Division within the CCI be created as an expert division, specifically tasked with regulation of the digital markets with participation from all the existing regulators concerned with e-commerce such as Department for Promotion of Industry and Internal Trade, Ministry of Consumer Affairs, Food and Public Distribution, Ministry of Electronics and Information Technology as well as the Reserve Bank of India (RBI).
While highlighting the significance of a national cybercrime policy, the panel noted that the government has adopted a fragmented approach with regard to matters relating to cybercrimes.
It further observed that such a fragmented approach will not serve the purpose, given the critical nature of cyber infrastructure and the increasing reliance on digital technology.
It therefore suggested that “cybercrimes and its related matters such as skilling and training in digital crimes investigation, creation of dedicated cybercrime division, cyber security standards, investigation process and grievance redressal mechanism, merit attention in the form of a National Cybercrime Policy”.
Tehran: The Iranian nuclear chief has said that Tehran and the International Atomic Energy Agency (IAEA) have agreed to regulate their relations on the basis of the safeguards agreements.
President of the Atomic Energy Organisation of Iran (AEOI) Mohammad Eslami made the remarks in an address to a joint press conference with visiting IAEA Director General Rafael Grossi in Tehran following their meetings earlier on Saturday.
Eslami said basing the two sides’ relations on the safeguards agreements helps the IAEA be assured of Iran’s nuclear activities and prevent any discrepancy or contradiction, Xinhua news agency reported.
The AEOI President noted that the communication “should be in a way to build trust,” adding the two sides should shield it from external interference so as to let cooperation and exchange continue in a “trustworthy manner” for resolving their issues.
He revealed that the AEOI and the agency have agreed that the latter will take part in the 30th Iranian Nuclear Conference to learn more about Iran’s nuclear programme and the capabilities of the country’s scientists.
On the possibility of the issuance of an anti-Iran resolution at the next meeting of the IAEA Board of Governors, Eslami said that should such a thing take place, Iranian authorities will definitely make decisions accordingly and the AEOI will act based on them.
Grossi, for his part, said the IAEA is ready to continue its cooperation with Iran and seeks to have a “serious and systematic” dialogue with Iran, adding that the talks on the JCPOA’s revival are on the agenda and will continue.
The cooperation between the agency and Tehran and the “good agreement” the two sides are expected to reach will contribute to the JCPOA’s revival, he noted.
He condemned any military action against nuclear facilities and power plants anywhere in the world.
He also gave the assurance that the IAEA has never been and will not ever be used as a political tool.
In recent months, the IAEA has criticised Iran for its lack of cooperation with the agency.
In November last year, the IAEA’s Board of Governors passed a resolution proposed by the US, Britain, France and Germany that called on Iran to collaborate with the agency’s investigators regarding the alleged “traces of uranium” at a number of its “undeclared” sites.
Iran has repeatedly rejected such allegations and insisted on the peaceful nature of its nuclear programme.
Iran signed the JCPOA with world powers in July 2015, agreeing to put some curbs on its nuclear programme in return for the removal of the sanctions on the country. The US, however, pulled out of the deal in May 2018 and reimposed its unilateral sanctions on Tehran, prompting the latter to reduce some of its nuclear commitments under the deal.
The talks on the JCPOA’s revival began in April 2021 in Vienna. No breakthrough has been achieved after the latest round of talks in August 2022.
Artificial intelligence’s newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.
The chatbot dazzled the internet in recent months with its rapid-fire production of human-like prose. It declared its love for a New York Times journalist. It wrote a haiku about monkeys breaking free from a laboratory. It even got to the floor of the European Parliament, where two German members gave speeches drafted by ChatGPT to highlight the need to rein in AI technology.
But after months of internet lolz — and doomsaying from critics — the technology is now confronting European Union regulators with a puzzling question: How do we bring this thing under control?
The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc’s draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight.
The catch? ChatGPT can serve both the benign and the malignant.
This type of AI, called a large language model, has no single intended use: People can prompt it to write songs, novels and poems, but also computer code, policy briefs, fake news reports or, as a Colombian judge has admitted, court rulings. Other models trained on images rather than text can generate everything from cartoons to false pictures of politicians, sparking disinformation fears.
In one case, the new Bing search engine powered by ChatGPT’s technology threatened a researcher with “hack[ing]” and “ruin.” In another, an AI-powered app to transform pictures into cartoons called Lensa hypersexualized photos of Asian women.
“These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable,” said Gary Marcus, an AI expert and vocal critic.
These AIs “are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose,” said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.
Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.
The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.
The idea was met with skepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament’s position, said that the amendment “would make numerous activities high-risk, that are not risky at all.”
In contrast, activists and observers feel that the proposal was just scratching the surface of the general-purpose AI conundrum. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy.
The two lead Parliament lawmakers are also working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the risk of the technology and being transparent about its workings. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.
Professionals in sectors like education, employment, banking and law enforcement have to be aware “of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei said.
If Parliament has trouble wrapping its head around ChatGPT regulation, Brussels is bracing itself for the negotiations that will come after.
The European Commission, EU Council and Parliament will hash out the details of a final AI Act in three-way negotiations, expected to start in April at the earliest. There, ChatGPT could well cause negotiators to hit a deadlock, as the three parties work out a common solution to the shiny new technology.
On the sidelines, Big Tech firms — especially those with skin in the game, like Microsoft and Google — are closely watching.
The EU’s AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton, suggesting that general-purpose AI systems such as ChatGPT are hardly being used for risky activities, and instead are used mostly for drafting documents and helping with writing code.
“We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton said. (ChatGPT, created by U.S. research group OpenAI, has Microsoft as an investor and is now seen as a core element in its strategy to revive its search engine Bing. OpenAI did not respond to a request for comment.)
A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.
Could the bot itself come to EU rulemakers’ rescue, perhaps?
ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.
“The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms,” it said.
The EU, however, has follow-up questions.
( With inputs from : www.politico.eu )
The White House and National Security Council declined to comment.
An order along those lines would be far more modest than some of the investment restrictions Biden and Congress considered last year. Then, policymakers proposed setting up a government review board that could deny U.S. deals in a wide swath of Chinese industries — including microchips, AI, quantum computing, clean energy and biotechnology — when they felt national security was at risk.
Backing away from those plans would represent a setback for China hawks in the White House, who have led a campaign to undermine Beijing’s high-tech industries, and could slow the momentum toward strategic separation — or “decoupling” — between American and Chinese industries. And it would underscore how even as diplomatic relations between Washington and Beijing nosedive, strong economic interests continue to bind the U.S. and China together.
Officials in the administration and Congress who have advocated a tougher line with China will be “very disappointed” if the eventual order “falls short of having the authority to reject deals” between U.S. and Chinese firms, said Eric Sayers, a former staffer for the U.S. Indo-Pacific Command during the Trump administration.
But even a scaled back executive order would represent a new chapter for federal oversight of American business overseas. Until recently, the U.S. government largely allowed American business free rein in the world’s second largest economy. But China’s use of U.S. technology and funding to develop its advanced microchips, weapons systems and other defense industries has pushed national security officials to argue for more oversight in recent years. Executive action scrutinizing so-called “outbound investments” represents the next step of that campaign to curtail Chinese technological development, even if it is less aggressive than earlier plans.
“While this [executive order] is the first official step, we shouldn’t expect it to be the last,” said Sayers, now managing director at D.C. consulting firm Beacon Global Strategies. He noted that past investment screening policies, like the Committee on Foreign Investment in the United States, took decades to be fully established. “This will likely be an additive process that grows over time through both executive powers and legislative action,” he said.
Though the final order is still in flux, the administration is likely to set up a pilot program under which U.S. firms doing new deals with Chinese artificial intelligence and quantum computing firms would have to disclose details to government authorities. Biotech and clean energy deals are now likely to be left out of the initial executive order, the people with knowledge said, though regulatory efforts could be extended after the pilot program and opportunities for comment from industry and outside groups.
Such an order would represent a setback for national security leaders in the White House, led by the National Security Council, who have advocated for a more aggressive approach. Last September, national security adviser Jake Sullivan said in a speech that the administration would aim to undermine Chinese development across a number of sectors — AI, quantum, chips, biotech and clean energy — that were subject to the original executive order discussions.
But despite continued tensions over Taiwan and the recent surveillance balloon debacle, the administration has since narrowed its approach at the request of the Treasury Department, which has long opposed an aggressive approach to outbound investments and has been meeting with U.S. financial firms since last fall. Momentum for the NSC’s more aggressive approach also slowed after the departure last fall of one of Sullivan’s key deputies, Peter Harrell, who had helped lead the economic campaign against Beijing.
Momentum in Congress also appears to have slowed. Over the past two years, lawmakers have debated bipartisan legislation that would have set up a new federal review panel headed by the U.S. Trade Representative with broad authority to review and deny American investments across a wide swath of the Chinese economy. But they were ultimately unsuccessful in attaching the bill to Congress’ CHIPS Act last year or the yearly defense spending bill.
Now, some Republicans in the House are advocating a narrower approach, with leaders of the House Financial Services Committee pushing legislation that would expand the federal government’s ability to blacklist Chinese firms, but not set up new federal oversight authority. “For the U.S. to compete with China, we cannot become more like the Chinese Communist Party,” Chair Patrick McHenry said at a hearing earlier this month.
The debate will now turn to the Senate, where the Banking Committee will hold a hearing Tuesday on sanctions, export controls and “other tools” like outbound investment screening. While Chair Sherrod Brown (D-Ohio) has been generally supportive of efforts to increase oversight of U.S. firms in China, it is still unclear what changes he and ranking member Tim Scott (R-S.C.) will seek to the bipartisan bill debated last year.
( With inputs from : www.politico.com )