About the author: Susan Ariel Aaronson is a professor at the George Washington University, where she directs the Digital Trade and Data Governance Hub and is co-PI of the NIST-NSF Trustworthy AI Institute for Law and Society.
In 1951, the British mathematician Alan Turing predicted that one day computers that could “think like humans” would be widely used. He also acknowledged that computer intelligence would arouse wonder as well as fear because “we cannot always expect to know what the computer is going to do.”
Some 72 years later, both of Turing’s predictions have come true. Artificial intelligence systems are everywhere—in our homes, schools, media, and government. Moreover, as he warned, people have contradictory views about AI. Most people understand that AI is a tool that can help individuals, corporations, and governments solve complex problems. But equally, many people are concerned that the same technology could threaten their jobs, their futures, democracy, human rights, and even human existence.
As the home of many of the world’s leading developers of AI, the British government decided it was uniquely positioned to address some of these concerns. Prime Minister Rishi Sunak announced last summer that the U.K. would host the world’s first AI safety summit. The event will be held Wednesday and Thursday at Bletchley Park and in London. Bletchley Park is where Turing and his team used an early computer to break German codes in the dark days of World War II.
Despite the symbolism of the location and British good intentions, the summit is unlikely to adequately address people's fears about AI. That's true for several reasons.
First, the British government has sought to limit the focus of AI safety to one broad category of AI, called “frontier AI.” The government defines frontier AI as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. U.K. officials argued that they focused on such models because they can serve many sectors of the economy, from defense to education to industry. Moreover, many leading AI researchers and companies explicitly aim to build AI agents whose capabilities can exceed those of humans.
The U.K. government has said that the conference will seek specific recommendations on four key problems: the risks to the planet from AI "misuse," including threats to biosecurity and cybersecurity; the risks from unpredictable "leaps" in frontier AI capabilities; the risks from a loss of control, such as computers going berserk; and, finally, the risks from the integration of frontier AI into society, such as election disruption.
However, academics and representatives of civil society have said this focus seems to miss some of the most important problems involving AI today. They want the AI Safety Summit to focus on immediate dangers from AI systems already in wide use. For example, cities and airports use facial recognition systems to identify terrorists and criminals in order to protect individuals from harm. But the widespread use of those systems raises serious concerns about unwarranted surveillance and discrimination.
Second, observers are concerned about the organization of the conference itself. The U.K. government plans two days of talks between "leading AI companies, civil society groups and experts in research." Participants will spend the first day in multisectoral discussions about frontier AI. On the second day, Sunak "will convene a small group of governments, companies and experts to further the discussion." Though two days hardly seems sufficient to forge a consensus, the government is already planning to move ahead with international talks on these issues.
Third, the conference is not very representative of the world's people or even of the country hosting it. The 100 attendees are undeniably knowledgeable about AI and active in devising ways to govern it. But if British policymakers genuinely want to understand and address AI safety risks, they must hear from a broader cross section of people. AI systems are often designed and deployed in an opaque manner; individuals may struggle to understand how these systems make decisions and thus are unlikely to trust them. Several studies, including my own, have shown that public involvement and a full feedback loop are essential to building public trust in AI, which in turn can help people see these systems as relatively safe to deploy.
Finally, the summit is not scheduled to address the business model underpinning many corporate variants of AI. As scholar Shoshana Zuboff and others have shown, many platforms rely on "surveillance capitalism." These firms provide their users with free online services in return for their personal data, which the firms can then monetize. Many of these platforms use that data to fuel several types of AI, which in turn they use to predict and influence our behavior. Put differently, these firms often manipulate their users to maximize revenue. The British people saw the dangers of this business model firsthand during the Brexit campaign, when social media campaigns drove political polarization.
The British deserve credit for trying to address the rising risks of AI. But as Turing noted, we can’t always know what computers will do. That’s all the more reason to focus on the very real problems already being posed by current variants of AI.
Guest commentaries like this one are written by authors outside the Barron’s and MarketWatch newsroom. They reflect the perspective and opinions of the authors. Submit commentary proposals and other feedback to [email protected].