Creativity in an AI world: Alex Peacock (RAPP)
Read insights from creative experts across a spectrum of disciplines, and discover how they're using generative AI as a supplement, not a substitute, for creativity.
Alex Peacock heads up Technology Strategy and Transformation for RAPP across EMEA. He has 20 years of experience transforming customer experiences and business outcomes through technology. That technologist's outlook gives him a distinctive perspective in this interview, as he approaches the question of generative AI with an expert's eye.
Toby: What do you find exciting about generative AI?
Alex: As a technologist, what really excites me about generative AI today is its maturation into enterprise solutions and the potential of multi-modal models. When we think of generative AI today, we can broadly place it in two buckets: generative text solutions, also called large language models (LLMs), like ChatGPT, and generative imagery – think diffusion models like Midjourney.
Both LLMs and diffusion models require access to large training datasets, and to date, that’s meant the public internet or collections of publicly available content – like Microsoft Common Objects in Context, Visual Genome, or Flickr30k. However, whether a piece of content is publicly available is different from whether you have permission to use it, and so, understandably, businesses have been cautious about using publicly trained generative AI – at least until it’s tested in law (here’s looking at you, New York Times).
In response to these concerns, we’ve seen traditional enterprise software businesses build generative models using proprietary data – Adobe, Microsoft, Getty, and Facebook. Removing the risk around copyright gives brands the confidence to use generative tools, and throughout 2024 we’ll see an acceleration in experiences driven by generative AI, albeit with human oversight. I can’t wait to see what we do with the capability.
So, what are multi-modal generative models, and why am I also excited about these? Most current AI systems, whether generative or not, are developed for a single purpose – take a text prompt and provide a text or image output, or take a consumer interaction and make a next-best-action decision. Multi-modal systems take a range of inputs from various sources – data, image, text, audio, video – allowing for a deeper understanding of context, closer to how a person might experience the world.
From a marketing perspective, it means we’ll be able to deliver experiences that reflect an individual’s true experience of a brand – what they’ve seen, what they’ve heard, what they’ve read. This richness of context, married with generative content, will enable us to create human, interactive, responsive, and conversational experiences across any channel.
Toby: Nobody wants their role to be completely done by AI, but where do you think generative AI could help creatives in your agency to get the most out of their creativity?
Alex: So, we could look at this with a sense of doom and gloom, but we should instead see it as an opportunity to throw off the shackles of daily content production and reconnect with our creative spark.
No one became a creative to resize an image or write an email subject line.
Content production tools will automate repetitive, low-value tasks, while multi-modal techniques will allow us to put ourselves in the shoes of the consumer and experience their journeys.
Armed with more time to focus on the idea and powered by richer consumer insights, our creatives will be free to push the boundaries again and remember why they joined the industry in the first place.
Toby: Do you think generative AI needs regulation - and how does that idea connect with creativity?
Alex: Absolutely. We’re seeing the impact of fake content and deepfakes today, and it’s going to get worse as the technology improves. As a brand, how do you manage authenticity in the face of misinformation? As a creator, how do you stop someone from leveraging your creativity without recognizing your contribution? As a consumer, how do you protect yourself from malicious content?
The challenge, as with all disruptive technologies, is that our current legal systems, often designed more than a hundred years ago, aren’t fit for purpose and can’t keep pace with the speed of evolution.
How this plays out in the creative space will be interesting. We don’t want to limit creativity through unwieldy regulation, but creators themselves need their products protected.
Toby: Although generative AI makes it easier and more accessible to be creative, do you think there’s a diluting effect on that creativity because of how accessible it is?
Alex: Not at all. Creativity has never been about the tools. I could design you an enterprise architecture framework, but I’ve never had the skill to create an award-winning creative idea. To quote Edward de Bono, “Creative thinking is not a talent; it’s a skill that can be learned”. Access to a growing set of generative tools and techniques is only going to empower the best creatives to push boundaries further, building the next generation of customer experiences.
Toby: So, is it easier or harder to be creative now compared with 5 years ago, with generative AI creating a sort of simulated creativity so easily?
Alex: Perhaps not one for the data and tech person to answer, but having worked with our creative teams over the last 5 years, I’ve seen how generative AI has allowed them to bring rich creative ideas to life more quickly. That’s not to say the creative process itself has become easier – creating human connections between brands and consumers remains a challenge – but the ability to create more, more quickly, has allowed teams to better test, articulate, and validate their vision.