Legal plan to tackle AI 'lying'
Experts say new laws should be created to stop AI from spreading misinformation and biased content.
Oxford University researchers have proposed a legal requirement for the providers of large language models (LLMs) like ChatGPT to minimise ‘careless speech’ and avoid centralised, private control of the truth.
The researchers' proposal aims to fill legal gaps where truth-related obligations do not apply, which is often the case for chatbots and other AI systems.
They say LLM providers should be legally obligated to ensure their models generate accurate and reliable information.
This involves mitigating the risks of producing responses that contain factual inaccuracies, misleading references, or biased information. The goal is to prevent the cumulative degradation and homogenisation of knowledge over time, which poses long-term risks to science, education, and shared truth in democratic societies.
The proposed legal framework also calls for LLM providers to be transparent about their model development processes, including data sources, training methods, and the algorithms used.
This transparency is considered crucial for public trust and for allowing independent verification of the information generated by these models.
By being open about how AI models are built and operate, LLM providers can help users better understand the limitations and potential biases of AI-generated content.
To avoid centralised, private control of the truth, the proposal calls for public involvement in the oversight and governance of LLMs.
This could include public consultations, stakeholder engagement, and collaborative efforts with academia, civil society, and other relevant parties.
Public involvement helps ensure that diverse perspectives are considered in the development and deployment of AI technologies, leading to more balanced and representative outputs.
Currently, the key international legal frameworks that impose truth-telling obligations are limited and often sector-specific.
The researchers identified that existing EU human rights law, the Artificial Intelligence Act, the Digital Services Act, and the Product Liability Directive do not comprehensively cover the unique risks posed by LLMs.
The proposed legal requirement aims to bridge these gaps by establishing a general duty for LLM providers to align their models with truth through democratic processes and public oversight.
These legal obligations would hold LLM providers accountable for the accuracy and reliability of their models' outputs.
While regulation can be seen as a constraint, it can also drive innovation by setting clear standards for AI development.
The experts say transparent AI governance with genuine public involvement can lead to more robust and reliable AI systems that serve the public good.