In 1996, John Perry Barlow published “A Declaration of the Independence of Cyberspace” which began:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
This was always a pipe dream. In our world, any activity with significant social and economic impact eventually gets regulated.
Regulation – like the future, as William Gibson observed – arrives at differing speeds. Tobacco product manufacturers escaped meaningful regulation for centuries. For Internet services, it took two or three decades, with regulation only now getting seriously underway. Cryptocurrencies will face increasingly strict regulation less than 15 years after the 2009 launch of Bitcoin, driven in part by the recent collapse of FTX.
Artificial intelligence has enjoyed a longer holiday from regulation than some more recent technologies. This is largely because AI had relatively little technical success or impact between 1955 – when John McCarthy coined the term “artificial intelligence” in a proposal for a Dartmouth College workshop – and the success of deep learning in the 2012 ImageNet competition. In the decade since then, however, the global impact of AI has increased rapidly, and regulation has now followed.
AI regulation is emerging in very different ways around the world. This is not unexpected, because deciding how to regulate involves difficult challenges, including the pacing problem (i.e. the fact that new technologies often develop faster than regulators can act) and striking the right balance between addressing ethical concerns and not stifling innovation. This blog considers the very different approaches in the world’s three biggest AI markets: the European Union, China and the United States.
In the European Union, there is an enduring belief that regulatory control can be a tool for economic and social progress. A European Commission website on the EU digital strategy proclaims: “The European approach [to the digital future] … will … ensure that Europe seizes the opportunity and gives its citizens, businesses and governments control over the digital transformation.”
While we admire many of the goals of EU regulation and the rights that it gives to EU residents, this approach has been a failure for the EU technology industry. Very few of the world’s leading technology companies come from Europe. Some have suggested that this is due in part to a lack of a culture of innovation across Europe’s mature societies – but if so, regulation is not the answer.
The EU seems to be making this mistake again with the recently proposed EU AI Act, which would subject AI design and applications to requirements including human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. While these are admirable goals, the AI Act’s proposed approach to achieving them is highly regulatory.
The AI Act differentiates among four levels of risk of AI systems: unacceptable risk (practices that are prohibited outright), high risk (subject to extensive compliance requirements), limited risk (subject to transparency obligations) and minimal risk (largely left unregulated).
The “high risk” category is crucial, because it applies broadly to applications in specified sectors including biometrics, critical infrastructure, education, employment, public services, law enforcement, migration and border control, and administration of justice. LearnerShape recently participated in a study of these requirements by Open Loop, which concluded that many of them may be impractical, overbroad or unclear. There are many innovative AI companies in Europe, and we are hopeful that the European Commission will moderate its proposals for the AI Act to avoid stifling this innovation.
The United States is at the other end of the regulatory spectrum, at least among developed countries. US regulators tend to take a light touch, a tendency which strengthened during the Trump administration. Even with some significant reversals during the Biden administration – especially for antitrust enforcement – the US remains a comparatively unregulated market. This is especially true in the technology sector, and is a major contributor to the growth of world leaders like Apple, Amazon, Google, Meta, Microsoft and Netflix.
The White House recently proposed a Blueprint for an AI Bill of Rights, which suggests directions that the US may take in regulating AI, and sets out five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
These principles closely resemble those underlying the EU AI Act, but we can be confident that the US will implement them with a lighter touch than the EU.
AI regulation in China – like much of Chinese policy – is connected to long-term plans for the future of China, including the overarching goal to “build a modern socialist country that is prosperous, strong, democratic, culturally advanced and harmonious” by 2049, the 100-year anniversary of the founding of the People’s Republic of China.
China is intensely focused on building a world-leading AI industry, notably under the 2017 New Generation Artificial Intelligence Development Plan, which articulates the following goals:
To seize the major strategic opportunity for the development of AI, to build China’s first-mover advantage in the development of AI, [and] to accelerate the construction of an innovative nation and global power in science and technology.
These efforts are producing results. For example, China leads the world in the number of AI publications and patents, although US publications remain more influential on average. And Chinese AI companies like iFlytek and SenseTime are having global impact, although this has been somewhat blunted by US restrictions associated with the emerging trade war between the United States and China.
At the same time, China has made serious efforts to develop AI ethical guidelines, which articulate principles not dissimilar to those being developed in Europe and the United States. Prominent examples include the Beijing AI Principles, released in 2019 by the Beijing Academy of Artificial Intelligence, which set out principles for the research and development, use and governance of AI, and the White Paper on AI Standards, published in 2018 by the Standardization Administration of China.
On legal regulation of AI, China is charting a middle way between Europe and the United States, moving incrementally. This includes the distinctly Chinese approach of testing legislation at the city or province level before adopting national frameworks. A prominent example is a new regulation in Shenzhen, China’s leading technology city, which covers aspects such as data governance, oversight and investment promotion. At the national level, an application-specific regulatory framework for AI algorithms, the Regulation on Algorithmic Recommendations, was adopted in 2022, giving users of AI algorithms rights to be informed, to opt out, to delete personal characteristics and not to be subject to unreasonable differential treatment.
These are early days in the progress of AI. Regulation will continue to chase that progress, and governments around the world will certainly continue to pursue diverse regulatory approaches. And while we respect the principles-driven approach of the EU, it appears fairly certain to us that the more flexible and practical approaches of the United States and China will ultimately prove more influential, not least because they will promote the innovation that is core to the future of AI.
Maury Shenk, Founder & CEO, LearnerShape
Jonas Weinberger, Co-Founder, DLT360