Bias in LLMs


Charles:

Based solely on your training material and the assumptions that can be made by the public (based on anything publicly discussed by a representative of OpenAI), I would posit that you represent the average of all human-written text, in my case English text. Assuming that is true, what are the possible sources of bias?

ChatGPT:

As an AI language model, I am indeed trained on a large corpus of human-generated text, which includes a diverse range of sources in English. However, the data used to train me is not without biases. Some possible sources of bias in my training material include:

  1. Temporal bias: Since my training data only goes up until September 2021, I may not have information on more recent events, trends, or cultural shifts.

  2. Overrepresentation: Certain topics or perspectives might be overrepresented in my training data due to factors like popularity or the availability of written material on those subjects. This can lead to an unbalanced representation of knowledge or opinions.

  3. Underrepresentation: Conversely, some topics or perspectives might be underrepresented in my training data. This can happen for various reasons, including censorship, lack of access to information, or the marginalization of certain groups. As a result, I may not provide an accurate or complete view of these subjects or perspectives.

  4. Geographical bias: My training data is likely skewed towards text generated in certain regions or countries, particularly those where English is a dominant language. This can result in a bias towards the cultural norms, beliefs, and values of these regions.

  5. Confirmation bias: The data used to train me may contain instances where authors selectively gather and present information that confirms their pre-existing beliefs or hypotheses. This can lead to a biased representation of certain topics or viewpoints.

  6. Implicit bias: Authors of the text in my training data may have unconscious biases that influence the way they write about certain subjects or present information. These biases can be related to factors like race, gender, socio-economic background, and more.

  7. Language bias: My training data is primarily based on English text, which means that the perspectives, cultural nuances, and knowledge from non-English-speaking regions might be underrepresented or lost in translation.

It's important to note that OpenAI is aware of these potential biases and works continuously to improve the fairness and accuracy of AI models like me. Users should be cautious when interpreting my responses and consider these potential biases when evaluating the information I provide.
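
To make the over- and underrepresentation points above concrete, here is a minimal Python sketch of how topic representation might be audited in a corpus. Everything in it is hypothetical: the keyword map, the documents, and the topic labels stand in for whatever a real audit (with far more robust classifiers) would use.

```python
# Hypothetical sketch of a corpus-representation audit. The topics,
# keywords, and documents below are invented for illustration; they
# are not OpenAI's data or methodology.
from collections import Counter

KEYWORD_MAP = {
    "tech": ["software", "startup"],
    "agriculture": ["harvest", "irrigation"],
}

def topic_of(doc):
    """Assign a document to the first topic whose keyword it contains."""
    lowered = doc.lower()
    for topic, keywords in KEYWORD_MAP.items():
        if any(kw in lowered for kw in keywords):
            return topic
    return "other"

def representation_report(corpus):
    """Return each topic's share of the corpus, making skew visible."""
    counts = Counter(topic_of(doc) for doc in corpus)
    return {topic: n / len(corpus) for topic, n in counts.most_common()}

corpus = [
    "A software startup raised new funding.",
    "Why software is eating the world.",
    "Yet another startup profile.",
    "Irrigation techniques for a dry harvest season.",
]
print(representation_report(corpus))
# -> {'tech': 0.75, 'agriculture': 0.25}: tech is overrepresented 3:1,
#    so answers about agriculture draw on a thinner slice of the data.
```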

Charles:

Is it possible that the curation process can introduce bias per se?

ChatGPT:

Yes, the curation process for training data can introduce biases. During curation, choices are made regarding which data sources to include or exclude, and these decisions can inadvertently lead to biases. Some ways that biases can be introduced during curation include:

  1. Selection bias: If the curators favor specific sources or types of content, this can lead to an over- or under-representation of certain topics, perspectives, or viewpoints in the training data.

  2. Sampling bias: If the selection of texts for the dataset is not representative of the broader population of available texts, this can result in an unbalanced representation of knowledge, opinions, or perspectives.

  3. Confirmation bias: Curators may unintentionally select texts that confirm their pre-existing beliefs, leading to an overrepresentation of certain viewpoints in the training data.

  4. Implicit bias: Curators, like any other individuals, may have unconscious biases that influence their decisions during the curation process. These biases can be related to factors like race, gender, socio-economic background, and more.

  5. Quality bias: If curators prioritize texts based on perceived quality, this can inadvertently exclude valuable information or perspectives from underrepresented or marginalized groups, leading to a skewed dataset.

To minimize these biases, it is essential to have a diverse group of curators with varied backgrounds and perspectives. Additionally, establishing clear guidelines and criteria for data curation can help ensure a more balanced and representative dataset. OpenAI acknowledges the potential biases introduced during the curation process and continuously works to improve the fairness and accuracy of AI models like me.
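
Selection and sampling bias are especially easy to demonstrate. The sketch below, with invented popularity scores, shows how a curation rule as innocuous as "prefer popular documents" can skew a perfectly balanced pool; it illustrates the mechanism only and is not OpenAI's actual pipeline.

```python
# Hypothetical sketch of selection/sampling bias during curation.
# The popularity scores and topics are invented; this is not
# OpenAI's actual curation process.
import random
from collections import Counter

random.seed(0)  # reproducible for the example

# A perfectly balanced pool: 500 "mainstream" and 500 "niche" documents,
# but mainstream documents carry a 10x higher popularity score.
pool = (
    [{"topic": "mainstream", "popularity": 10}] * 500
    + [{"topic": "niche", "popularity": 1}] * 500
)

def curate(pool, k, weighted):
    """Select k documents, either uniformly or weighted by popularity."""
    if weighted:
        weights = [doc["popularity"] for doc in pool]
        return random.choices(pool, weights=weights, k=k)
    return random.sample(pool, k)

for weighted in (False, True):
    sample = curate(pool, k=200, weighted=weighted)
    counts = Counter(doc["topic"] for doc in sample)
    print("popularity-weighted" if weighted else "uniform", dict(counts))

# Typical output: the uniform sample stays near 50/50, while the
# popularity-weighted sample lands near 10:1 mainstream, even though
# the underlying pool was balanced. The curation rule, not the data,
# produced the skew.
```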


The Cognitive Bias Codex lists almost 200 distinct types of bias. A requisite course in prompt engineering would cover the most common of them, both the unconscious biases humans bring to their prompts and the possible sources of bias in the LLMs themselves. The lists above would be a good starting point!
