“Maybe it meant something. Maybe not, in the long run, but no explanation, no mix of words or music or memories can touch that sense of knowing that you were there and alive in that corner of time and the world. Whatever it meant.” ― Hunter S. Thompson, Fear and Loathing in Las Vegas
In Fear and Loathing in Las Vegas, gonzo journalist Raoul Duke (the alter ego of author Hunter S. Thompson) and his attorney Dr. Gonzo travel to and around Las Vegas in a drug-induced haze while attempting to report on the Mint 400 motorcycle race, at the same time philosophizing about 1960s counterculture values and the decline of the American Dream.
It feels like the current debate about artificial intelligence is in about the same place.
Two things struck me about audience engagement at two recent speaking events for the London financial community: a seminar on AI and law for the Financial Services Club, and a general conversation about AI at the annual OMFIF Digital Monetary Institute symposium.
First, people have great interest and concern about the risks of AI. Second, they are much more favorably inclined toward regulation than is usual for business people.
These concerns seem even stronger than the amazement at the potential benefits of AI – benefits that, to my optimistic mind, outweigh the risks.
These London financial folks are not alone. Even OpenAI CEO Sam Altman – perhaps the most influential individual in the current AI landscape – warned that AI could cause “harm to the world” and called for AI regulation, including a new US government agency, in recent testimony to the US Congress.
With this widespread concern, surely something must be done. But what? At present, there is a marked lack of direction on approaches to managing AI.
The confusion and paralysis are exemplified by a well-intentioned but misguided letter issued in April by Max Tegmark’s Future of Life Institute, recommending that we “pause giant AI experiments” and signed by numerous well-known people (including Elon Musk, Yoshua Bengio, Stuart Russell, Emad Mostaque and Steve Wozniak). Misguided because it simply ain’t going to happen – the barn door is open, and the horsemen of AI are galloping about the world paying little or no attention to well-intentioned letters.
Yet the risks of AI are real. The technology community, government and society at large must cooperate to decide who will take practical action – and then do so.
In the OMFIF interview above, I articulated a four-part taxonomy of AI risk, and have since added two further categories.
There are not yet good solutions to these issues, and no one knows the best way forward. But I offer two principles that should be central to any solution:
The first principle is that we must replace generalized fear and complaints about AI risks with action to address specific problems.
To illustrate, let’s look at a specific, large example of an AI problem that we need to solve. Probably the largest single type of damage caused by AI to date is the effect of social media recommendation algorithms in promoting social division and filter bubbles. This is a classic example of AI misalignment. At least initially, social media companies wanted their recommendation algorithms to show people what they wanted to see. However, we now know that the algorithmic goal of promoting “engagement” with content is often best served by content that is extreme or anger-inducing, as the toy sketch below illustrates. The effects are well known (try a Google search on “social media divides us”).
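To make this misalignment concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the posts, the engagement and outrage scores, and the ranking functions are invented toy stand-ins, not any platform’s actual system. The point is simply that when outrage correlates with engagement, a ranker that maximizes engagement alone will put the most divisive content first.

```python
# Toy model of recommendation misalignment (all data invented for illustration).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # expected clicks/comments (hypothetical)
    outrage_score: float         # 0 = calm, 1 = maximally divisive (hypothetical)

POSTS = [
    Post("Local charity bake sale raises record funds", 0.20, 0.05),
    Post("Balanced explainer on a contested policy", 0.35, 0.15),
    Post("Celebrity slams rival in furious on-air rant", 0.80, 0.90),
    Post("OUTRAGE: 'they' are destroying everything you love", 0.95, 0.98),
]

def rank_by_engagement(posts):
    """The naive objective: maximize predicted engagement and nothing else."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_with_divisiveness_penalty(posts, penalty=1.0):
    """One possible algorithmic correction: trade engagement off against outrage."""
    return sorted(posts,
                  key=lambda p: p.predicted_engagement - penalty * p.outrage_score,
                  reverse=True)

if __name__ == "__main__":
    print("Engagement-only feed (the most divisive content rises to the top):")
    for post in rank_by_engagement(POSTS):
        print(f"  {post.title}")
    print("\nFeed with a divisiveness penalty:")
    for post in rank_with_divisiveness_penalty(POSTS):
        print(f"  {post.title}")
```

The second function sketches a purely algorithmic mitigation; the legislative approaches discussed next take a different route, making platforms responsible for the content they host.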
There are practical solutions to this problem. A leading approach is to put greater responsibility on social media platforms and other internet companies to review and police the content posted by their users. This is the approach of the EU Digital Services Act, which was adopted in October 2022 and takes effect in February 2024, and the UK Online Safety Bill, which is pending in Parliament. This legislation is far from perfect – and will have collateral consequences like restricting some legitimate speech – but it’s a practical approach to the problem.
At LearnerShape, we are focused on using AI to deliver practical solutions to learning problems. By deploying open source learning infrastructure, we aim to make AI and other technologies widely available to create customized learning applications that can solve specific educational challenges. The potential for AI to offer major benefits for education is becoming ever clearer, particularly with recent advances in generative AI. (Some particularly vivid illustrations of this were recently presented by Sal Khan in a TED talk on how the Khan Academy is using generative AI.)
The second principle for dealing with AI risk is to resist the temptation to treat regulation as the primary solution. The EU appears to have succumbed to this temptation with its AI Act, which has some beneficial elements but looks impractical and likely to have a devastating effect on the EU AI sector.
Legislation and regulation need to be part of broader solutions if they are not to become major problems in their own right.
Returning to the example of division caused by recommendation algorithms, I observe the challenges mainly from the viewpoint of two countries – the United States (where I was born, educated and began my career) and the United Kingdom (where I have lived for more than 20 years). Somehow, these two countries, despite many cultural similarities, have shown markedly different degrees of social division. The United States is becoming tribal, with people in different parts of the political spectrum finding it difficult even to have civil conversations with each other. The United Kingdom, despite the paroxysm of Brexit, is culturally in a much less conflictual place, with an effective multicultural society, as was recently demonstrated at the coronation of King Charles III and Queen Camilla (as The Spectator intelligently observed).
Why this difference? The answer is not simple or obvious, reflecting complex influences from the differing current and historical circumstances of the two countries. In a recent appearance on the Lex Fridman Podcast, Max Tegmark (of the Future of Life Institute and its letter discussed above) attributed the looming harms of AI, and many of the problems of our societies, to “Moloch” – referring to Scott Alexander’s excellent 2014 essay Meditations on Moloch. The essential (and subtle) point is that civilization creates tendencies to self-destruction – personified by the Canaanite god Moloch – that are independent of the good or bad will of any individual. In the face of this tendency toward social entropy, our challenge is to find societal and cultural paths that take us toward cohesion, success and happiness, and away from division, decay and misery.
This is a big challenge. How do we re-establish a broadly accepted social contract? No one knows how to make this happen, and extreme groups at both ends of the political spectrum prefer conflict. This is not a problem that can easily be addressed by regulation, and certainly not by regulation alone.
Finally, turning away from the fear, loathing and hype-induced haze that currently surrounds AI, it is important to remember the optimistic likelihood that the opportunities of AI are much greater than the risks. Given the fascinating developments and challenges that AI presents, in Hunter S. Thompson’s words that begin this blog, it is an amazing privilege to be “alive in th[is] corner of time and the world”.
We’ll continue to do a lot of work at LearnerShape on the opportunities of AI, while aiming to control the risks. I’ll personally be getting out there actively for speaking engagements and outreach to our community. So whether you’re high on AI or hate it, I’d love to talk to you about it. Please get in touch.
Maury Shenk, Founder & CEO, LearnerShape