Scarlett Johansson’s fight with OpenAI’s ‘eerily similar’ Sky voice typifies marketers’ rising trust concerns over LLM data and prompt security; AI vendors must come clean, says Salesforce Marketing Cloud CMO Bobby Jania
Marketing teams around the world are leading AI adoption in their companies but are equally restraining its deployment over growing concerns that their prompts and data are being scraped to keep AI developers’ large language models (LLMs) learning and getting smarter. Salesforce Marketing Cloud CMO Bobby Jania says the Scarlett Johansson furore with OpenAI two weeks ago over the likeness of her voice in the unveiling of “Sky” typifies every conversation he has with a marketing team: it starts with concerns and questions about “where their data goes, who is going to have access to it, who learns from it, who trained off it”. The opacity around what data LLMs are ingesting is proving a boon for Salesforce and its position of retaining no customer prompts or data on any LLMs plugged into its various cloud products. “The reality is right now it's a differentiator for us because we're able to talk about the fact that our business is not our customers’ data at all,” says Jania. “It should be table stakes for the industry. For a lot of solutions out there, their preference would be to continue to train an LLM using the data that goes in, which could be that customer data. And from who I talk to, every customer is very concerned about where their data goes.”
What you need to know:
- Marketers’ concerns over AI platforms raiding their data and prompts to feed large language model (LLM) learning are among the biggest roadblocks to AI deployment, says Salesforce Marketing Cloud CMO Bobby Jania.
- It echoes what marketers from Carnival's Princess Cruise Lines and Care Pharma told Mi3 last week during an AI forum put on by This Is Flow.
- Salesforce took an early position on AI and trust, using its size to strike deals with AI firms that blocked them from using Salesforce customer data when plugged into those systems via the tech giant.
- That position is swinging deals in favour of Salesforce, said Jania, but it shouldn’t. For the tech to be trusted, the industry at large had to be open and transparent.
- Per Jania: “For [tech and AI] companies to be successful, they're going to have to talk about it because I have not had a conversation with a customer that they have not wanted to ask me every question about that.”
Smart company
Salesforce global Marketing Cloud CMO Bobby Jania says Scarlett Johansson’s stoush with OpenAI over the likeness of its new voice, Sky, to Johansson’s own is directly connected to some of the biggest reservations marketers have about deploying AI – who taps and uses their data inside those big AI powerhouses?
Salesforce has been one of the earliest to turn AI trust into a competitive edge and it’s working, said Jania, in defusing the single biggest upfront resistance from marketing teams around the world: will company data and prompts be poached to inform the LLMs of any AI service they’re considering deploying?
Salesforce’s scale has given it leverage to dictate terms to the multiple LLMs its customers can use – only those that agree to its “zero prompt retention” policy can plug and play via what it calls the “Einstein trust layer” – essentially a gateway between those LLMs and the data a company holds in any of Salesforce’s cloud products.
Big beats LLMs
“We are big enough that we've been able to get the LLM partners to agree to zero prompt retention,” he said. “So the whole idea for how we're going to work this is one of our customers is going to write a prompt, the value of that prompt is then grounded with data they have in Salesforce, but then none of that is going to be retained by the LLM.”
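Mechanically, the flow Jania describes is simple to picture. Below is a minimal, hypothetical Python sketch – not Salesforce’s actual Einstein trust layer API – of a gateway that grounds a marketer’s prompt with CRM data, hands it to whichever LLM the customer has chosen, and keeps nothing in between; every name here (CrmRecord, ground_prompt, run_through_gateway, llm_complete) is invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical "trust layer"-style gateway: the prompt is grounded with CRM
# data the customer already holds, sent to an interchangeable LLM, and the
# gateway itself never logs or stores the prompt or the grounding data.

@dataclass
class CrmRecord:
    first_name: str
    segment: str
    last_purchase: str

def ground_prompt(template: str, record: CrmRecord) -> str:
    """Merge the marketer's prompt template with CRM fields (grounding)."""
    return template.format(
        first_name=record.first_name,
        segment=record.segment,
        last_purchase=record.last_purchase,
    )

def run_through_gateway(
    template: str,
    record: CrmRecord,
    llm_complete: Callable[[str], str],  # whichever LLM the customer plugs in
) -> str:
    grounded = ground_prompt(template, record)
    response = llm_complete(grounded)
    # Zero prompt retention on this side: nothing is persisted; the grounded
    # prompt simply goes out of scope once the response comes back.
    return response

if __name__ == "__main__":
    # Stand-in for a real model call. In practice the guarantee Jania cites
    # also depends on the LLM provider contractually agreeing not to retain
    # or train on what it receives.
    fake_llm = lambda prompt: f"[draft email based on: {prompt}]"
    record = CrmRecord("Alex", "loyalty-tier-2", "ocean cruise, May 2024")
    print(run_through_gateway(
        "Write a renewal offer for {first_name}, a {segment} customer whose "
        "last purchase was {last_purchase}.",
        record,
        fake_llm,
    ))
```

The point of the sketch is that the retention promise is as much contractual as technical: a gateway can avoid logging, but the “zero prompt retention” Jania refers to rests on the LLM partners agreeing not to store or train on what passes through.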
That seemingly simple position on data security is so rare across AI and tech firms at present that it is converting business – and it shouldn’t.
“It should be table stakes [for industry] and the reality is right now it's a differentiator for us,” Jania said. “Our goal is to be open and have our customers choose who [which LLM] they want to use, but use it in a safe way. For a lot of [AI] solutions out there, their preference would be to continue to train an LLM using the data that goes in, which could be that customer data.”
Jania said it’s also one of the high-priority, recurring questions from marketing teams around LLM selection – how, on what and on whose data is it being trained? Salesforce’s position of blocking LLMs from ingesting customer data and prompts is increasingly the X-factor in landing contracts, he said.
“I've had conversations with leaders where initially the company's stance was just ‘we will not work with an LLM. We don't want our data to get out there’. And we have sat down to walk through with leaders exactly what we do and we've changed the conversation.”
Ultimately, Jania said, AI companies and Salesforce rivals will have to accept that transparency and data security – that is, no data scrapes, no leakage – are table stakes.
“For a lot of companies to be successful, they're going to have to talk about it, because I have not had a conversation with a customer that they have not wanted to ask me every question about that; how are the LLMs trained and what happens to the data. So I just don't see how they're going to get business with customers – because that's all I've been talking to customers about.”