AI Safety Is a Real Problem. The U.K.’s New Summit Won’t Do Enough.

By News Room
Last updated: November 1, 2023, 10:21 AM

About the author: Susan Ariel Aaronson is a professor at the George Washington University, where she directs the Digital Trade and Data Governance Hub and is co-PI of the NIST-NSF Trustworthy AI Institute for Law and Society.

In 1951, the British mathematician Alan Turing predicted that one day computers that could “think like humans” would be widely used. He also acknowledged that computer intelligence would arouse wonder as well as fear because “we cannot always expect to know what the computer is going to do.”  

Some 72 years later, both of Turing’s predictions have come true. Artificial intelligence systems are everywhere: in our homes, schools, media, and government. Moreover, as he warned, people have contradictory views about AI. Most people understand that AI is a tool that can help individuals, corporations, and governments solve complex problems. But equally, many people are concerned that the same technology could threaten their jobs, their futures, democracy, human rights, and even human existence.

Because the U.K. is home to many of the world’s leading AI developers, the British government decided it was uniquely positioned to address some of these concerns. Prime Minister Rishi Sunak announced last summer that the U.K. would host the world’s first AI safety summit. The event will be held Wednesday and Thursday at Bletchley Park and in London. Bletchley Park is where Turing and his team used an early computer to break German codes in the dark days of World War II.

Despite the symbolism of the location and the British government’s good intentions, the summit is unlikely to adequately address people’s fears about AI. That’s true for several reasons.

First, the British government has sought to limit the summit’s focus on AI safety to one broad category of AI, called “frontier AI.” The government defines frontier AI as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. U.K. officials argued that they focused on such models because they can serve many sectors of the economy, from defense to education to industry. Moreover, many leading AI researchers and companies explicitly aim to build AI agents whose capabilities can exceed those of humans.

The U.K. government has said that the conference will seek specific recommendations on four key problems: the risks to the planet posed by AI “misuse,” including threats to biosecurity and cybersecurity; the risks from unpredictable “leaps” in frontier AI; the risks from a loss of control, such as computers going berserk; and, finally, the risks from the integration of frontier AI into society, such as election disruption.

However, academics and representatives of civil society have said this focus seems to miss some of the most important problems involving AI today. They want the AI Safety Summit to focus on immediate dangers from AI systems already in wide use. For example, cities and airports use facial recognition systems to identify terrorists and criminals in order to protect individuals from harm. But the widespread use of those systems raises serious concerns about unwarranted surveillance and discrimination. 

Second, observers are also concerned about the organization of the conference. The U.K. government plans two days of talks between “leading AI companies, civil society groups and experts in research.” Participants will spend the first day in multisectoral discussions about frontier AI. On the second day, Sunak “will convene a small group of governments, companies and experts to further the discussion.” Though two days hardly seems sufficient to forge a consensus, the government is already planning to move ahead with international talks on these issues.

Third, the conference is not very representative of the world’s people, or even of the country hosting it. The 100 attendees are undeniably knowledgeable about AI and are active in devising ways to govern it. But if British policy makers genuinely want to understand and address AI safety risks, they must hear from a broader cross section of people. AI systems are often designed and deployed in an opaque manner. Individuals may struggle to understand how these systems make decisions, and thus they are unlikely to trust these processes. Several studies, including my own, have shown that public involvement and a full feedback loop are essential to building public trust in AI, which in turn can help people see these systems as relatively safe to deploy.

Finally, the summit is not scheduled to address the business model underpinning many corporate variants of AI. As the scholar Shoshana Zuboff and others have shown, many platforms rely on “surveillance capitalism.” These firms provide their users with free online services in return for their personal data, which the firms can then monetize. Many of these platforms use that data to fuel several types of AI, which in turn they use to predict and influence our behavior. Put differently, these firms often manipulate their users to maximize revenue. The British people saw the dangers of this business model firsthand during the Brexit campaign, when social media campaigns drove political polarization.

The British deserve credit for trying to address the rising risks of AI. But as Turing noted, we can’t always know what computers will do. That’s all the more reason to focus on the very real problems already being posed by current variants of AI.

Guest commentaries like this one are written by authors outside the Barron’s and MarketWatch newsroom. They reflect the perspective and opinions of the authors. Submit commentary proposals and other feedback to ideas@barrons.com.
