| Abstract |
This project investigates whether popular AI chatbot systems treat users differently based on demographic information. By testing identical prompts with only the user's stated race, ethnicity, and gender changed, the experiment measures differences in response length, tone, assumptions made, and recommendations provided. The goal is to determine if AI systems exhibit systematic bias that could perpetuate stereotypes or provide unequal service to different demographic groups, particularly as these technologies become increasingly integrated into education and daily decision-making. |