- Establishes transparency and accountability as a foundation.
- Seeks views from Google, Meta, OpenAI, Microsoft
LONDON (Reuters) – Britain on Monday set out principles designed to prevent artificial intelligence (AI) models from being dominated by a handful of technology companies to the detriment of consumers and businesses, by emphasizing the need for responsibility and transparency.
Britain’s antitrust regulator, the Competition and Markets Authority (CMA), like other authorities around the world, is trying to rein in some of the potential negative consequences of AI without stifling innovation.
The seven principles it lists aim to regulate foundation models such as ChatGPT by holding developers accountable, preventing Big Tech firms from tying up the technology within their walled platforms, and stopping anti-competitive conduct such as bundling.
CMA chief executive Sarah Cardell said on Monday there was real potential for technology to boost productivity and make millions of everyday tasks easier, but a positive future could not be taken for granted.
She said there was a risk that the use of AI could be dominated by a few players exercising market power that prevents the full benefits from being felt across the economy.
“That’s why today we proposed these new principles and launched a broad engagement program to help ensure that the development and use of foundation models evolves in a way that promotes competition and protects consumers,” she said.
The CMA’s proposed principles, which come six weeks before Britain hosts a global summit on AI safety, will underpin its approach to AI as it takes on new powers in the coming months to oversee digital markets.
It said it would now seek views from leading AI developers, including Google, Meta, OpenAI, Microsoft, Nvidia and Anthropic, as well as governments, academics and other regulators.
The proposed principles also cover access to key inputs, diversity of business models, including open and closed, and flexibility for companies to use multiple models.
In March, Britain opted to split regulatory responsibility for AI between the CMA and other bodies that oversee human rights and health and safety rather than create a new regulator.
The United States is studying possible rules to regulate AI, and digital ministers from the Group of Seven major economies agreed in April to adopt “risk-based” regulation that would also preserve an open environment.
Reporting by Paul Sandle and Sarah Young, editing by Kylie MacLellan and David Evans
Our standards: The Thomson Reuters Trust Principles.