7.5 Ethical, Energy, and Societal Considerations
Generative AI is powerful, but that power comes at a hidden cost. Training and running these models require huge amounts of energy. As writer Claire D. Costa explains, “from power-hungry data centers to the enormous carbon footprint of AI training, machine learning is draining global energy resources at an alarming rate.”1 Think about it this way: training a model like GPT-3 alone can consume enough electricity to power 100 homes for a year, and cooling the data centers that host these systems can swallow millions of gallons of water annually, putting pressure on local supplies. Microsoft and Google have already acknowledged these sustainability pressures and announced investments in renewable energy and water-efficient data centers as a result (Microsoft, 2024; Google, 2024).2 Keeping an eye on these issues reminds marketers to balance innovation with stewardship.
Beyond energy use, there are deeper questions about creativity and authenticity. Generative AI can create remarkably human-like content, from song lyrics to product slogans, yet it’s not drawing on emotions or personal experience the way a person would. A great example is Coca‑Cola’s “Create Real Magic” campaign described in Section 7.4, which invited artists worldwide to co-create artwork using AI tools like DALL·E and GPT-4.3 The results were visually stunning, but Coca‑Cola was careful to emphasize the human–AI partnership, making sure the artists’ names appeared alongside the AI as part of the process. That kept the focus on creativity as collaboration, not automation for its own sake.
There are also serious ethical questions that can’t be ignored. AI algorithms reflect the data they’re trained on, and when that data lacks diversity, so do the outputs. Amazon, for instance, famously had to scrap its AI-driven hiring tool after it learned to discriminate against women because the training data reflected mostly male résumés.4 Similarly, a Bloomberg investigation found that image generators like Stable Diffusion not only reflect real-world stereotypes but amplify them: when prompted to generate images of professionals like doctors or CEOs, these tools disproportionately produced white male figures.5 That kind of skew carries real-world consequences for what audiences see and believe. Text-based AI shows similar problems: one study found that ChatGPT favored names typically associated with Asian women and undervalued those linked to Black men in résumé evaluations.6 Examples like these remind us that AI can perpetuate bias unless we actively check its assumptions and intervene.
And then there’s the question of authenticity. AI can mimic empathy or creativity, but it doesn’t truly understand or feel those things. Without thoughtful human guidance, AI might generate a perfectly optimized campaign that still feels flat or “off,” especially in sensitive areas like health or social issues. That’s why marketing teams need to keep the human touch front and center. The most powerful messages will come from marketers who leverage AI to do the heavy lifting (pulling data, generating drafts, spotting trends) but who also add their own judgment, empathy, and creativity.