“In their new paper ‘Do large language models have a legal duty to tell the truth?’, published in Royal Society Open Science, the Oxford researchers set out how LLMs produce responses that are plausible, helpful and confident but contain factual inaccuracies, misleading references and biased information. They term this problematic phenomenon ‘careless speech’, which they believe causes long-term harm to science, education and society.
Lead author Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, explains: ‘LLMs pose a unique risk to science, education, democracy, and society that current legal frameworks did not anticipate. This is what we call “careless speech”, or speech that lacks appropriate care for truth. Spreading careless speech causes subtle, immaterial harms that are difficult to measure over time. It leads to the erosion of truth, knowledge and shared history, and can have serious consequences for evidence-based policy-making in areas where details and truth matter, such as health care, finance, climate change, media, the legal profession, and education. In our new paper, we aim to address this gap by analysing the feasibility of creating a new legal duty requiring LLM providers to create AI models that, put simply, will “tell the truth”.’”
Read more in ‘Large Language Models pose a risk to society and need tighter regulation’ via the University of Oxford.