From masters of the digital universe to pariah figures peddling a machine-dominated dystopia. Well, perhaps that’s not quite the journey that AI developers have been on, but in the last few months the debate around the benefits and risks associated with artificial intelligence tools has intensified, fuelled in part by the arrival of ChatGPT on our desktops. Against this backdrop, the U.K. government has published plans to regulate the sector. So what will this mean for startups?
In tabling proposals for a regulatory framework, the government has promised a light-touch, innovation-friendly approach while at the same time addressing public concerns.
And startups working in the sector were probably relieved to hear the government talking up the opportunities rather than emphasising the risks. As Science, Innovation and Technology Minister Michelle Donelan put it in her foreword to the published proposals: “AI is already delivering fantastic social and economic benefits for real people – from improving NHS medical care to making transport safer. Recent advances in things like generative AI give us a glimpse into the enormous opportunities that await us in the near future.”
So, mindful of the need to help Britain’s AI startups – which collectively attracted more than $4.65 billion in VC investment last year – the government has shied away from doing anything too radical. There won’t be a new regulator. Instead, the communications watchdog Ofcom and the Competition and Markets Authority (CMA) will share the heavy lifting. And oversight will be based on broad principles of safety, transparency, accountability and governance, and access to redress rather than being overly prescriptive.
A Smorgasbord of AI Risks
Nevertheless, the government identified a smorgasbord of potential downsides. These included risks to human rights, fairness, public safety, societal cohesion, privacy and security.
For instance, generative AI – technologies producing content in the form of words, audio, pictures and video – may threaten jobs, create problems for educationalists or produce images that blur the lines between fiction and reality. Decisioning AI – widely used by banks to assess loan applications and identify possible fraud – has already been criticized for producing outcomes that simply reflect existing industry biases, thus providing a kind of validation for unfairness. Then, of course, there is the AI that will underpin driverless cars or autonomous weapons systems. The kind of software that makes life-or-death decisions. That’s a lot for regulators to get their heads around. If they get it wrong, they could either stifle innovation or fail to properly address real problems.
So what will this mean for startups working in the sector? Last week, I spoke to Darko Matovski, CEO and co-founder of CausaLens, a provider of AI-driven decision-making tools.
The Need For Regulation
“Regulation is necessary,” he says. “Any system that can affect people’s livelihoods must be regulated.”
But he acknowledges it won’t be easy, given the complexity of the software on offer and the diversity of technologies within the sector.
Matovski’s own company, CausaLens, provides AI solutions that aid decision-making. To date, the venture – which last year raised $45 million from VCs – has sold its products into markets such as financial services, manufacturing and healthcare. Its use cases include price optimisation, supply chain optimisation, risk management in the financial services sector, and market modeling.
On the face of it, decision-making software should not be controversial. Data is collected, crunched and analyzed to enable companies to make better, automated choices. But of course, it is contentious because of the danger of inherent biases when the software is “trained” to make those choices.
So as Matovski sees it, the challenge is to create software that eliminates the bias. “We wanted to create AI that humans can trust,” he says. To do that, the company’s approach has been to create a solution that effectively monitors cause and effect on an ongoing basis. This enables the software to adapt to how an environment – say a complex supply chain – reacts to events or changes, and that is factored into decision-making. The idea is that decisions are made according to what is actually happening in real time.
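To make that idea concrete, here is a minimal, hypothetical sketch in Python of decision-making driven by a continuously re-estimated cause-and-effect relationship. It is not CausaLens’s product or code; the names (PriceEffectModel, choose_price) and the toy pricing scenario are invented purely for illustration.

# Hypothetical sketch: decide from the latest causal estimate.
# PriceEffectModel and choose_price are invented names, not CausaLens's API.

class PriceEffectModel:
    """Online least-squares fit of demand = intercept + effect * price.

    The slope stands in for the causal effect of price on demand, under the
    strong assumption that prices were set independently of confounders.
    Needs at least two observations at distinct prices before estimating.
    """

    def __init__(self) -> None:
        self.n = 0
        self.sum_p = self.sum_d = self.sum_pp = self.sum_pd = 0.0

    def observe(self, price: float, demand: float) -> None:
        # Fold each new (price, demand) pair into running sums, so the
        # estimate keeps adapting as the environment changes.
        self.n += 1
        self.sum_p += price
        self.sum_d += demand
        self.sum_pp += price * price
        self.sum_pd += price * demand

    def effect(self) -> float:
        # Least-squares slope: estimated change in demand per unit price change.
        var = self.n * self.sum_pp - self.sum_p ** 2
        return (self.n * self.sum_pd - self.sum_p * self.sum_d) / var

    def intercept(self) -> float:
        return (self.sum_d - self.effect() * self.sum_p) / self.n


def choose_price(model: PriceEffectModel, candidates: list[float]) -> float:
    # Decide using the *current* estimate: pick the candidate price that
    # maximises expected revenue = price * predicted demand.
    return max(candidates, key=lambda p: p * (model.intercept() + model.effect() * p))


model = PriceEffectModel()
for price, demand in [(10.0, 100.0), (12.0, 90.0), (15.0, 70.0), (11.0, 96.0)]:
    model.observe(price, demand)
print(choose_price(model, [9.0, 11.0, 13.0, 15.0]))  # picks 13.0 on this toy data

The point of the sketch is the feedback loop: every new observation revises the estimated effect, so decisions reflect what is happening now rather than a stale training snapshot – which is, in spirit, what Matovski describes.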
The bigger point, perhaps, is that startups need to think about addressing the risks associated with their particular flavor of AI.
Keeping Pace
But here’s the question. With dozens, or perhaps hundreds, of AI startups developing solutions, how do the regulators keep up with the pace of technological development without stifling innovation? After all, regulating social media has proved difficult enough.
Matovski says tech companies have to think in terms of addressing risk and working transparently. “We want to be ahead of the regulator,” he says. “And we want to have a model that can be explained to regulators.”
For its part, the government aims to encourage dialogue and co-operation between regulators, civil society and AI startups and scaleups. At least that’s what it says in the White Paper.
Room in the Market
In framing its regulatory plans, part of the U.K. government’s intention is to complement an existing AI strategy. The key is to offer a fertile environment for innovators to gain market traction and grow.
That raises the question of how much room there is in the market for young companies. The recent publicity surrounding generative AI has focused on Google’s Bard software and Microsoft’s relationship with ChatGPT creator OpenAI. Is this a market for big tech players with deep pockets?
Matovski thinks not. “AI is pretty big,” he says. “There is enough for everyone.” Pointing to his own corner of the market, he argues that “causal” AI technology has yet to be fully exploited by the bigger players, leaving room for new businesses to take market share.
The challenge for everyone working in the market is to build trust and address the genuine concerns of citizens and their governments.