WPP's global AI chief Daniel Hulme says holdco building client 'brand brains' to 'glue' data to ChatGPT in bid to make AI-powered ads – and first one is months away
Marketers are scrambling to work out how to use AI. WPP may have a solution. It's building "brand brains", according to Chief AI Officer Daniel Hulme – and plans to "glue" those troves of individual brand assets and data to generative AIs such as ChatGPT to create ads and content that don't require staff to sit there prompting eternally. The first brain is months away. If it works – Hulme is pretty confident in the concept – a brand brain factory may be the next addition to the multinational holdco's scope of work.
What you need to know:
- WPP AI chief Daniel Hulme thinks the holdco will soon be able to harness AI to build brand assets at scale by “gluing” brand-specific data repositories to generative AI.
- That would mean staff don’t have to sit there prompting large language model systems – which by nature are general – because the models are automatically fed context, i.e. brand data including tone, style guides etc.
- The first “brand brain” is months away. If it works, there will likely be more. Many more.
- Hulme thinks those predicting mass job losses from AI don’t actually know what they are talking about – because nobody does.
- Companies are talking up job creation because they “don’t want to sound dystopian”, he acknowledged, but he expects there will be winners and losers “as in any industrial revolution”.
- As marketers – and the world – scramble to work out how to harness AI, tech bosses and researchers have urged AI labs to pause development on tools more powerful than GPT-4.
- They fear that the arms race is already out of control – and there’s a fair bit at stake.
WPP is months away from bringing its first AI-powered “brand brain” to life. AI expert Daniel Hulme – founder of Satalia, which was acquired by the holdco in 2021 and wrapped into Wunderman Thompson – is bidding to “glue” the brains to generative AI such as ChatGPT in order to make oven-ready campaign assets based specifically on brand data.
If the first brain works, WPP could ultimately roll out the technology across its client portfolio.
“I've got no doubt that it will happen,” Hulme told Mi3. “Sam Altman [OpenAI CEO] is also a proponent of this idea and we've got team members that have done this in an academic setting. So we know that it's possible.”
Now WPP just has to make its neurons fire.
Outside context problem
While everyone’s piling into generative AI, the problem with large language models is that “they are general,” said Hulme, who founded Satalia in 2007 and went on to work with the likes of Tesco, using AI to optimise its distribution, and PwC, to wring the most out of its auditors, before WPP swooped in late 2021.
“They're trained on lots and lots of data, and so they have kind of general knowledge about the world. To really use them well, they need to be better at reasoning. That will come, but they need to have better context. Either you have to get very, very good at prompting them, or they have to have context.”
To avoid turning WPP’s 120,000 employees into an army of frustrated ChatGPT prompters, the aim is to build “mini brains that are trained on a client’s data set, then gluing that brain with a ChatGPT brain,” per Hulme. “So when a creative says come up with a new campaign that involves a laptop, it knows you are talking about Dell. It uses Dell’s tone of voice, style guide etcetera. So you don’t have to use [additional] prompts to build that context,” he told Mi3.
“So the future is gluing together these specialised brains. WPP is in a very privileged position, because we’ve got lots of proprietary data about brands to then create contextualised outputs rather than generalised outputs. It’s like you’re turning [the mini brains] into experts.”
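In outline, the "gluing" Hulme describes resembles automatically prepending a brand-specific context store to a generic LLM prompt, so the creative's raw request arrives pre-contextualised. The sketch below illustrates that pattern using the Dell example from his quote; the data, keyword matching and helper names are all invented for illustration and reflect nothing of WPP's actual implementation.

```python
# Hypothetical "brand brain" sketch: a brand-specific data store is
# "glued" onto a generic request before it reaches an LLM, so nobody
# has to re-type tone of voice or style-guide context in every prompt.
# All brand data and helpers below are illustrative assumptions.

BRAND_BRAIN = {
    "dell": {
        "tone_of_voice": "confident, practical, technology-forward",
        "style_guide": "sentence-case headlines; avoid superlatives",
        "keywords": ["laptop", "notebook", "workstation"],
    },
}

def match_brand(request: str):
    """Infer which brand a creative request refers to from its keywords."""
    text = request.lower()
    for brand, data in BRAND_BRAIN.items():
        if any(kw in text for kw in data["keywords"]):
            return brand
    return None

def build_contextual_prompt(request: str) -> str:
    """Glue brand context onto the raw request before it reaches the LLM."""
    brand = match_brand(request)
    if brand is None:
        return request  # no match: fall back to a generic prompt
    data = BRAND_BRAIN[brand]
    return (
        f"Brand: {brand.title()}\n"
        f"Tone of voice: {data['tone_of_voice']}\n"
        f"Style guide: {data['style_guide']}\n\n"
        f"Request: {request}"
    )

prompt = build_contextual_prompt("Come up with a new campaign that involves a laptop")
print(prompt)
```

A production system would presumably retrieve this context from a model trained on the client's data rather than a lookup table, but the shape of the flow – request in, brand context attached, enriched prompt out – is the same.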
Hulme indicated the first brand brain is months away, a prospect that has Justin Ricketts, CEO of WPP production unit Hogarth, licking his lips amid soaring demand for dynamic content and thousands of variants of the same ad. Will more brains and brands then follow?
“I think so,” said Hulme. “It depends on how successful they are.”
But marketers are clamouring for a solution.
“Everybody is asking us what they should be doing with ChatGPT. Everybody feels they should be using this technology somehow. And the first thing you ask ChatGPT is what you should be doing for automobile [category] or whatever – and it just comes up with some generic stuff. We think that we know how to use this technology to make it relevant for brands.”
Hence the push to start gluing brains together. In the meantime, Hulme said WPP is already “quite mature” in harnessing AI.
“This is not new [for WPP]. What we will see over the next two years is an acceleration of how [AI technologies] are being used to create new content in a much more rapid way and show better success for clients. So I think it's just going to be an acceleration of that whole process,” per Hulme.
“Beyond two years, I want to unlock the creative capacity of the creatives using these technologies. I've got some ideas about how to do that, but right now it's about accelerating all of the things that we're already doing … How do we get content out there and make sure it’s [the right] content?”
Could the brains be used beyond content development and optimisation, for example CX or media planning schedules?
“Where you can see the response of that decision, get the feedback and then get another adaption, and where it can be automated, is where you’re going to see an adoption of these technologies,” per Hulme.
“Adaption is synonymous with intelligence. There is a very good definition of AI which is ‘goal-directed adaptive behaviour’. And the key word is ‘adaptive’. What you want to do is build systems that can adapt themselves, make decisions and learn about those decisions.”
AI taking your job?
Goldman Sachs this week made the latest attempt to read the auguries on AI’s workplace impact, predicting it could affect 300 million jobs. The truth is that “nobody knows whether this is going to take jobs or create jobs,” said Hulme.
“I guess the party line for most organisations is it is going to create jobs because they don’t want to sound dystopian. The fact is that nobody knows. But it is within our gift to create that future. I genuinely believe that architected in the right way, used in the right way, [AI] will just augment and enable people to thrive. These technologies remove friction from the creation and dissemination of goods. But people then go and tend to do more interesting creative things. That's the trend that we're seeing.”
But automating work previously performed by humans will surely lead to some headcount reductions? Can the junior designer once earning a crust by tweaking hundreds of ad iterations become a higher value creative thinker?
“In any industrial revolution, jobs have been displaced, but people have been able to go and retrain and find new jobs. I think in the next ten years we will see more of that,” said Hulme. “I don't think that anybody should be commenting beyond ten years. Nobody should be claiming that there will always be jobs; nobody should be claiming that there's not going to be any jobs.
“But I think that certainly over the next ten years we're going to see a boom in the use of these technologies.”
Fate amenable to change?
The risk of that boom running out of control is starting to worry some very smart people. Tech titans, executives and researchers this week called for a six-month pause on work to train AIs more powerful than GPT-4.
Per the letter:
Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
It concludes:
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
The letter has subsequently been criticised by some of those whose work it cites.