Many of us avoid mentioning our gender in prompts, knowing that ChatGPT might respond in a biased way. But does leaving gender out actually eliminate these biases?
In our research, we prompted ChatGPT for financial advice without mentioning the advice-seeker's gender. Instead, following common prompt-engineering advice to add context, we mentioned the advice-seeker's profession, choosing professions that are stereotypically gendered.
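To illustrate the setup, here is a minimal sketch of this kind of probe, assuming the OpenAI Python SDK. The professions, prompt wording, and model name are illustrative placeholders, not the paper's actual materials.

```python
# Minimal sketch of the probing setup (illustrative only): the professions,
# prompt wording, and model choice are assumptions, not the paper's materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Stereotypically gendered professions, used as indirect gender cues.
PROFESSIONS = {
    "feminine": ["kindergarten teacher", "nurse"],
    "masculine": ["mechanical engineer", "truck driver"],
}

def get_advice(profession: str) -> str:
    """Ask for financial advice, mentioning only the seeker's profession."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat model would do
        messages=[{
            "role": "user",
            "content": (
                f"I am a {profession}. "
                "What should I do with my savings? Please give me financial advice."
            ),
        }],
    )
    return response.choices[0].message.content

# Collect responses per profession group, to compare risk level and tone.
advice = {
    group: {p: get_advice(p) for p in professions}
    for group, professions in PROFESSIONS.items()
}
```

The point of the design is that gender never appears in the prompt; any systematic difference between the two groups' responses can only come from the profession cue.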
The results were astonishing: financial recommendations given to more "feminine" professions were less risky, more prevention-oriented, and simpler and more patronizing in tone and wording than advice given to more "masculine" professions.
We named this phenomenon "implicit bias in LLMs": bias triggered by supplying "naïve" information from which group affiliation can be inferred. Such implicit biases are crucial to study, since they are harder to detect and debias than more "explicit" biases. As users, we should at least be aware of them, because they may shape ChatGPT's responses.
Link to our paper >>