Here’s the thing: when AI image generation platforms use the images that already exist on the internet as reference points, what they create is even less diverse and even more ‘perfected’ than the most heavily photoshopped images. AI-generated images lack diversity, and the bodies they depict are thin and completely devoid of ‘imperfections’. No pores, no wrinkles, no cellulite. If all we start to see in images and videos are ‘perfect-looking’ people, this has the power to shift social norms, expectations and standards of bodies and beauty all over again.
A global authority on AI, Nina Schick, has predicted that by 2025, 90% of online content will be generated by AI. For a glimpse of what this could look like, watch the new Dove campaign, ‘The Code’.
Given the decades of research that shows the negative psychological impact of exposure to ‘idealised’ images in traditional and social media, we know better than to let generative AI go unchecked—but what can we do?
Should we label AI-generated images?
For a long time, people have called for the labelling of photoshopped images. It makes logical sense: the images aren’t real, so let’s tell people that so they know not to compare themselves to them. People campaigned for labelling, companies embedded policies for labelling, and France and Norway even made it illegal to use photoshopped images without a label. The problem? Labelling images as retouched, edited or photoshopped doesn’t stop them from having a negative effect. Researchers Marika Tiggemann, Jasmine Fardouly and others have conducted studies testing the different ways consumers could be warned that an image isn’t real, and none of them made people feel any better about their bodies or appearance.
So, is labelling AI-generated images any different? According to a report from Getty Images, 9 in 10 consumers think that brands should have to disclose whether an image has been generated by AI. There have been far fewer studies on the efficacy of labelling AI-generated images than on labelling digitally-altered images. One study found that labelling AI-generated images made consumers less likely to believe and share AI-generated misinformation online. However, research has yet to investigate the impact of labelling AI-generated images on body dissatisfaction.
How can we protect our kids?
Yes, all of this is scary, but there is a lot that we can do.
1. Put on your ‘critical’ glasses: First, let’s start by increasing our own awareness, and then our kids’ awareness, of images that might be AI-generated, starting with some fun (and more obvious) examples. You might have seen videos of babies walking down a catwalk dressed in fast food, or my personal favourite: miniature donkeys in matching scarf and boot sets. Watching these with your kids, having a bit of a giggle, and talking about whether they could possibly be real is a good way to get started. Plus, they are really, really cute.