Can China and the West agree on global AI rules amid existential risks?

In the West, AI technologies are primarily being developed by private tech firms to serve companies and individuals, motivated by the goal of achieving market dominance.

But that is not entirely the case in China. “The Chinese government is actually in the game of developing AI,” Jia said.

While Chinese tech giants like Baidu and Tencent are also aiming for market dominance, the Chinese government is far more deeply involved in the actual development of AI technologies than governments in Western economies are.

The intervention of Chinese authorities at the development level is based mainly on Beijing’s desire to use such technologies for mass surveillance, and to improve domestic technological expertise to reduce dependence on Western technology, according to Jia.

“Because China’s goals are vastly different [from the West’s] – one is to succeed in the market, the other is to use methods to strengthen control and also flex their strengths and showcase their supremacy – you can imagine their concerns are naturally going to be different,” Jia added.

[Video: Inside a Chinese factory that makes humanoid robots with enhanced facial movements (02:12)]

The contrasting views – and concerns – of China and the West were spotlighted during a symposium in Beijing in June, when Zhang Linghan, a professor at the Institute of Data Law at China University of Political Science and Law, noted that different countries had “different perspectives, histories and frameworks of laws”.

While countries could learn from each other’s regulations, she said differences between the European Union and China “cannot [be] ignored”, adding that some risks that were highlighted in Europe were in fact acceptable and legal in China.

“It is [due to] the difference in culture and the difference of situation,” said Zhang, who sits on a United Nations high-level advisory body on AI.

At the same symposium, a European diplomat elaborated on the EU Artificial Intelligence Act (AI Act) adopted in May, as well as the bloc’s priorities in the regulation of AI.

Marjut Hannonen, head of the European Union’s trade delegation to Beijing, said Europe’s “most critical” concerns were to ensure that the safety and fundamental rights of its citizens would be safeguarded.

Under the act, applications deemed too dangerous – including those that manipulate people’s free will or have uses for social surveillance – are banned. “We don’t allow that,” Hannonen said.

You Chuanman, director of the Centre for Regulation and Global Governance under the Institute for International Affairs at the Chinese University of Hong Kong’s Shenzhen campus, stressed the role of “cultural differences” between China and the West, particularly in terms of human rights.

These distinctions could translate into significant hurdles when countries sit down to discuss global rules on technologies involving facial recognition or surveillance, simply because of how the different governments operate.

Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who researches issues in global technology, said the Chinese government’s top concern over AI was how it would affect online content and information.

Its earliest binding regulations, he said, focused on the role of the emerging technology in the creation and dissemination of content, including both recommendation algorithms and AI-generated content.

“China says these regulations are aimed at creating a healthy online environment, but experts in the US and EU would just call it censorship,” he said.

Jia, with the USC Marshall School of Business, said Western governments were typically concerned with issues like privacy, transparency, bias, fairness and accountability – concerns that stemmed from individual users of the technologies and activists.

“In China, with the Chinese government, its goal is maybe to have effective surveillance and there does not exist an activist community to ‘bug you’,” she said.

“A lot of the concerns that are high on the regulator’s mind in Western countries … are not relevant as there are no such underlying forces trying to achieve them in the China context.”

The divergence represented a “fundamental difference in values” between China and most liberal democracies in the West, said Weifeng Zhong, an affiliated scholar at George Mason University’s Mercatus Centre.

In the West, civil liberties take priority, and the use of AI – and more broadly of technology – must respect that.

In contrast, under Chinese governance, technology was supposed to serve the “greater, collective good, but what is good for society is often determined by the regime,” Zhong said.

“That is why AI-powered surveillance in China can easily serve oppressive purposes in the name of enhancing safety and order. We have seen this divide since the dawn of the internet, and the gulf in the age of AI will only be larger.”

Jia called China’s divergence from the West – specifically in its domestic governance of AI – a reflection of an “underlying gap in ideology” that would only widen in a world fraught with geopolitical rivalries, and a potential stumbling block that could slow global progress toward unified AI regulation.

“It will be surprising actually if [countries] can easily achieve agreements over regulatory issues and the governance of AI,” she said, adding that an absence of trust between China and the West could make agreeing on global rules increasingly difficult.

“AI is the facade of geopolitical tensions. This is not just a technology [issue]. It’s deeply intertwined with politics.”

Zhong said the current debates around AI risks mirrored the broader disagreements between China and the West over human rights and freedoms, adding that he was not optimistic that the two sides would resolve their differences any time soon.

What then would the global governance of AI look like? It might follow a similar path as that of the internet, he suggested.

“The Chinese regime has a very different view from the West on information and how freely it should be able to flow domestically and across borders. The result of that divide is now a rather fractured World Wide Web,” he said.

“There was a time earlier on in China’s economic reforms when it appeared as though China would become an open society, but that ship seems to have sailed.”

Over the past year, China has signalled its ambitions to play a bigger role in setting global rules and standards over AI, and it has sought greater cooperation on the emerging technology with other countries.

[Video: Japanese AI app can detect when cats are in pain (02:24)]

At the China-Africa Internet Development and Cooperation Forum in April, both sides recognised the need to cooperate more on AI, calling for more technology research, development and applications, as well as increased dialogue.

Earlier, in October, China proposed its own framework – the Global AI Governance Initiative – which calls for equal rights on AI development for all countries and joint efforts to tackle the misuse of technologies by terrorists.

Chinese Premier Li Qiang this month called for more inclusive development of AI, urging countries to bridge an “intelligence gap” and to work together to foster a “fair and open” environment so that more countries could benefit from the emerging technology.

In one recent example of cooperation between China and the West, the UN General Assembly adopted a China-sponsored resolution this month urging the international community to ensure that developing nations have equal opportunities to benefit from AI.

The non-binding resolution was co-sponsored by more than 140 countries, including the United States.

You, from the Chinese University of Hong Kong’s Shenzhen campus, called the resolution an achievement, saying that it was a “small step” for countries to move forward with.

“That is also how global governance achieves its goal. We start with areas in which we do not dispute or disagree with each other … we build the foundation, then along the way try to find other consensus,” he said.

But other aspects of AI – such as military applications – might be increasingly difficult for countries to agree on, given that AI is now regarded by many governments as a national priority.

“It is one of the most important technologies of the 21st century geopolitical struggle,” You said.

Still, despite the deep-seated differences between Chinese and Western societies, there could be areas of shared interest, however limited, where the two sides could cooperate.

Apart from issues such as access to AI for developing economies – which was included in the recent UN resolution – You suggested that countries could mutually explore topics surrounding the energy resources needed to sustain future AI innovations.

Zhong suggested collaboration on mitigating the potential existential risks that advanced AI could pose to humans, adding that addressing such a threat was a goal even countries with very different values could stand behind.

Sheehan, from the Carnegie Endowment for International Peace, suggested that the “only hope” for global governance of AI was if countries focused on a “very narrow set of problems” where every country had an interest in resolving them.

One example is the proliferation of powerful AI systems – like those used to turbocharge hacking capabilities – to non-state actors around the world.

“Both the US and China will be using AI systems to hack each other, but neither country wants those systems to be in the hands of terrorists or criminal syndicates,” he said.

Jeffrey Ding, an assistant professor of political science at George Washington University, added that there was still potential for proactive global governance on AI safety issues despite the different domestic regulatory approaches.

“Even during the most intense periods of the Cold War, the US still cooperated with the Soviet Union on nuclear safety and security issues because it was in everyone’s national security interest to avoid accidental or unauthorised nuclear detonations,” he said.

“Similarly, when it comes to controlling powerful AI systems … there is ample room for international cooperation and coordination.”

Jia said that while it would be a tall order for China to build a consensus with the West on some issues, including those that might appear to endanger the Chinese government’s grip on power, conversations should still take place.

“Without conversation, there’s zero probability of finding common ground, no matter how small it is,” she said, while adding a cautionary note.

“Conversation is necessary, but the hope of having a global AI [framework] that works for everybody should not be high.”