In a concerning development, researchers at the Oxford Internet Institute have issued a warning about the growing tendency of Large Language Models (LLMs) used in chatbots to fabricate information. These models are designed to generate plausible, human-sounding responses, with no guarantee that those responses are accurate or aligned with factual data.
According to a paper published in Nature Human Behaviour, LLMs are typically treated as knowledge sources and used to provide information on demand. However, the data they are trained on is not always accurate or trustworthy: LLMs often draw on online sources that contain false statements, opinions, and misinformation. Because of their human-like design, users tend to regard LLMs as credible and reliable, which can lead them to believe a response is accurate even when it has no basis in reality or presents a biased version of the truth.
The researchers emphasized how heavily science and education depend on information accuracy and urged the scientific community to use LLMs responsibly by treating them as “zero-shot translators.” In this mode, users supply the model with relevant, vetted data and ask it to transform that input into a desired output, rather than relying on the model itself as a source of information. Because the user controls the input, the output can be checked against it and verified as trustworthy and grounded in factual data.
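To make the “zero-shot translator” idea concrete, here is a minimal Python sketch contrasting it with the knowledge-source pattern the researchers warn against. The llm_generate() helper is hypothetical, a stand-in for whatever chat or completion API a reader actually uses; the point is only that the trusted source material travels inside the prompt and the model is asked to transform it, not to recall facts on its own.

```python
# Sketch of the "zero-shot translator" usage pattern.
# llm_generate() is a hypothetical helper, not tied to any specific LLM SDK.

def llm_generate(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")


def ask_model_for_facts(question: str) -> str:
    # Risky pattern: the model is treated as a knowledge source, so nothing
    # anchors the answer to vetted data and it may simply be fabricated.
    return llm_generate(question)


def translate_trusted_input(source_text: str, instruction: str) -> str:
    # Translator pattern: the caller supplies trusted material and asks the
    # model only to transform it (summarise, restructure, rephrase). The
    # output can then be checked against an input the caller already controls.
    prompt = (
        f"Using ONLY the source material below, {instruction}\n\n"
        "--- SOURCE MATERIAL ---\n"
        f"{source_text}\n"
        "--- END SOURCE MATERIAL ---"
    )
    return llm_generate(prompt)


# Example call (once llm_generate is wired to a real model):
# summary = translate_trusted_input(
#     vetted_abstract,
#     "rewrite it as three plain-language bullet points.",
# )
```

The difference lies not in the API call but in where the facts come from: in the translator pattern the model never has to “know” anything, so the scientist stays responsible for, and in control of, the underlying information.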
LLMs offer real benefits for scientific workflows, but scientists should exercise caution when using them and maintain realistic expectations of their capabilities. Used properly, and with careful attention to their limitations, LLMs can contribute positively to scientific research and its advancement.