October 19, 2025 – In a controversial new trend, AI-generated images of extreme poverty, children, and sexual violence survivors are increasingly being used in global health campaigns, particularly by NGOs and organizations working on issues such as hunger and human rights. The phenomenon, often referred to as “poverty porn 2.0,” has raised ethical concerns among experts, who warn that these images perpetuate harmful stereotypes and represent vulnerable populations disrespectfully.
What Happened
A new study by Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, highlights the growing use of AI-generated images to depict the suffering of marginalized groups, including refugees and survivors of sexual violence. These images, found on stock photo sites such as Adobe Stock and Freepik, show scenes such as children huddled in muddy water or an African girl in a wedding dress with a tear on her cheek. The study finds that such stereotypical visuals have become increasingly common, with media outlets and organizations using AI-generated imagery to portray sensitive global issues such as hunger and abuse.
While these images may be cheaper and more convenient to use, Alenichev and others have raised concerns that the portrayal of poverty and violence through AI tools is both harmful and disrespectful, contributing to a distortion of reality in global health communications.
The Problem with AI-Generated Imagery
Noah Arnold, from Fairpicture, an organization focused on ethical imagery in global development, expressed concern that AI-generated visuals are being used in place of real photographs of individuals affected by poverty or violence. “Some organizations are actively using AI imagery, and others are experimenting with it,” Arnold explained. The practice is becoming more prevalent, driven by cost savings and the fact that AI-generated images do not require consent from the people depicted.
AI-generated images of poverty are often exaggerated and highly racialized, showing stereotypical depictions of struggling communities. Alenichev describes them as part of a visual grammar that perpetuates negative stereotypes. Stock-site captions such as “photorealistic kid in refugee camp” or “Caucasian volunteer provides medical consultation to young black children” frequently accompany these images, raising questions about their impact on public perception.
The Ethics of Representation
The growing trend of using AI to represent marginalized communities in global health communications has led to debates over the ethics of such depictions. Kate Kardol, an NGO communications consultant, expressed concern about the continued use of “poverty porn” in both real and digital forms. “It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal,” Kardol said.
In the past, global health organizations have faced scrutiny for sensationalizing poverty and violence in an effort to garner donations or attention. The use of AI-generated images exacerbates the problem: such images can distort reality even further, making harmful stereotypes easier to perpetuate.
Concerns Over AI Bias and Misinformation
The rise of AI-generated content has also raised concerns about the biases embedded in these images. Generative AI tools often replicate and amplify existing societal prejudices, particularly in their representation of non-Western countries and communities. These images are not only inaccurate; because they may be used to train the next generation of AI systems, they could also reinforce the same biases and prejudices at a larger scale.
Plan International, which had previously used AI-generated images in a 2023 campaign against child marriage, now advises against depicting individual children with AI-generated visuals. The organization had adopted such imagery to safeguard the privacy of real children, but it now acknowledges that the practice can have negative consequences for representation, ultimately distorting public understanding of the real issues.
Moving Forward: Ethical Guidelines for AI Use
Despite these challenges, some organizations are taking steps to address the ethical concerns raised by AI imagery. Alenichev suggests that greater editorial diversity is needed in newsrooms and NGO communications departments to ensure that marginalized communities are represented with dignity and accuracy. Noah Arnold also emphasizes the need for organizations to prioritize authenticity in their visual content and avoid opting for convenient but potentially harmful AI-generated imagery.
The debate over AI-generated content in global health communications is far from settled. However, experts agree that ensuring ethical representation and avoiding exploitation of vulnerable populations should remain a priority in all forms of media and advocacy.
This story may be updated with more information as it becomes available.
