
Is ChatGPT a Lie?


Artificial Intelligence & Machine Learning

Review of “ChatGPT is Bullshit” by the University of Glasgow’s Michael Townsen Hicks, James Humphries, and Joe Slater

In their paper “ChatGPT is Bullshit,” Michael Townsen Hicks, James Humphries, and Joe Slater of the University of Glasgow examine the pervasive inaccuracies in the outputs of large language models (LLMs) such as ChatGPT.

They argue that these inaccuracies should be framed not as “AI hallucinations” but as “bullshit” in the philosophical sense articulated by Harry Frankfurt in his seminal work “On Bullshit” (2005).

This provocative thesis challenges the prevailing discourse around AI misrepresentations and suggests a paradigm shift in how we understand and communicate the behavior of these systems.

Key Arguments

  1. Indifference to Truth:
    • The authors assert that LLMs are fundamentally indifferent to the truth of their outputs. Unlike humans who may unintentionally provide false information due to misperception or ignorance, LLMs generate text without any regard for its veracity. This characteristic aligns with Frankfurt’s notion of bullshit, where the primary aim is not to deceive or tell the truth but to produce a desired impression or outcome.
  2. Dual Definitions of Bullshit:
    • Hicks, Humphries, and Slater distinguish two senses in which LLMs can be considered bullshitters. Firstly, they produce statements without concern for their truthfulness. Secondly, the nature of their operation means they are not genuinely communicative agents; they do not possess beliefs or perceptions. The authors argue that LLMs clearly meet at least one of these definitions, reinforcing their claim that AI-generated falsehoods are better understood as bullshit.
  3. Critique of the ‘Hallucination’ Metaphor:
    • The paper critiques the term “hallucination” as misleading and potentially harmful. Referring to AI inaccuracies as hallucinations can inflate the perceived capabilities of these systems and mislead both the public and experts about the nature of AI errors. The authors contend that this metaphor could lead to inappropriate solutions and misguided efforts in AI alignment, as it suggests a semblance of intention or perception in the machine.
  4. Implications for Communication and Policy:
    • The authors emphasize the importance of accurate terminology in science and technology communication. Mischaracterizing AI errors as hallucinations could lead to poor decision-making by investors, policymakers, and the general public, who often rely on simplified and metaphorical explanations. By adopting the term “bullshit,” the authors believe we can foster a clearer and more realistic understanding of AI capabilities and limitations.

The paper offers a compelling critique of the current discourse surrounding AI inaccuracies. By reframing these inaccuracies as bullshit, the authors provide a more philosophically grounded and practically useful perspective. This shift in terminology has significant implications for how we approach AI development, regulation, and public communication.

One of the paper’s strengths is its clear articulation of the philosophical underpinnings of bullshit and its relevance to AI. By drawing on Frankfurt’s work, the authors provide a robust theoretical framework that enhances the credibility of their argument. Additionally, the paper’s critique of the hallucination metaphor is persuasive, highlighting the potential dangers of overhyping AI capabilities.

However, the paper could benefit from a more detailed exploration of the practical implications of adopting the term bullshit. While the authors make a strong case for the theoretical accuracy of this term, further discussion on how this shift would impact AI development, policy, and public perception would strengthen their argument.

“ChatGPT is Bullshit” presents a thought-provoking and timely critique of the language used to describe AI inaccuracies. By challenging the metaphor of hallucinations and proposing the concept of bullshit, Hicks, Humphries, and Slater offer a valuable contribution to the discourse on AI and its societal implications. Their work encourages a more honest and precise conversation about the capabilities and limitations of large language models, ultimately fostering a more informed and realistic understanding of AI technology.

ChatGPT is bullshit