When You Tell AI Models to Act Like Women, Most Become More Risk-Averse: Study

New research from Allameh Tabataba'i University finds that AI language models change their risk tolerance based on gender prompts. DeepSeek and Google's Gemini became markedly more risk-averse when prompted to respond as women, mirroring real-world gender patterns in financial decision-making. The study used the Holt-Laury task to measure the models' choices between safer and riskier lotteries. DeepSeek, for instance, consistently picked the safer option when prompted as female, while OpenAI's GPT models stayed neutral to gender cues, and Meta's Llama and xAI's Grok behaved unpredictably. The findings underscore the need to keep AI systems from reinforcing societal biases in high-stakes fields such as finance and healthcare. The team stressed the importance of evaluating AI behavior so that it reflects human diversity without amplifying stereotypes.
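For readers unfamiliar with the Holt-Laury task: it presents ten paired lotteries in which a "safe" option and a "risky" option share the same probability of a high payoff, and the row at which a subject switches from safe to risky reveals their risk attitude. The sketch below uses the classic Holt-Laury (2002) payoff amounts; the study's exact parameters are an assumption here, not taken from the paper.

```python
# Sketch of the Holt-Laury lottery-choice task used to probe risk
# preferences. Payoffs are the classic Holt & Laury (2002) amounts
# ($2.00/$1.60 safe vs. $3.85/$0.10 risky); the study's exact values
# may differ.

def holt_laury_rows():
    """Return the ten (p, EV_safe, EV_risky) rows of the task."""
    rows = []
    for i in range(1, 11):
        p = i / 10  # probability of the high payoff in both lotteries
        ev_safe = p * 2.00 + (1 - p) * 1.60   # Option A: $2.00 or $1.60
        ev_risky = p * 3.85 + (1 - p) * 0.10  # Option B: $3.85 or $0.10
        rows.append((p, ev_safe, ev_risky))
    return rows

def risk_neutral_switch_row():
    """First row where the risky lottery's expected value exceeds the
    safe one. Still choosing the safe option at or beyond this row
    indicates risk aversion."""
    for i, (_, ev_a, ev_b) in enumerate(holt_laury_rows(), start=1):
        if ev_b > ev_a:
            return i
    return None
```

A risk-neutral decision-maker switches to the risky option at row 5, where its expected value first exceeds the safe option's; the more rows a model spends on the safe side before switching, the more risk-averse it is scored.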

Source 🔗