Posted 02/09/2024 8:08am


hAIku

AI's vast frontier,
Risks and rewards intertwined,
Guidance is key here.

In partnership with
Salesforce

Partner Op-Ed: BYO-AI is turning data security into the wild, wild west

By Leandro Perez, Senior Vice President and CMO APAC, Salesforce

 

I’ve seen the excitement of my peers about how generative AI (GenAI) can shave hours off routine tasks and boost productivity.

But what if that enthusiasm led to accidentally sharing sensitive company data with a public AI model? Or publishing AI-generated content that turned out to be inaccurate or completely false?

With GenAI, the risks are real and the stakes are high.

It feels like just yesterday (well, for me anyway!) that employees started bringing their own iPhones and iPads to work, forcing companies to scramble to implement new policies and tighten security.

Now, we’re facing a similar, yet far more challenging scenario: employees with BYO-AI at work. Without proper guardrails, this could put your brand at significant risk.
 

The eagerness to experiment: A double-edged sword

Everyone, from the C-suite to frontline employees, is eager to harness the power of GenAI. The fear of being left behind and the measurable productivity benefits are driving a wave of experimentation.

At Salesforce, we see this enthusiasm daily. From my conversations with peers, it’s clear businesses are exploring AI’s potential at breakneck speed.

Recent Salesforce research shows a surge in AI experimentation, with 53 per cent of Australian professionals actively using or experimenting with generative AI at work.

Yet this excitement also carries significant risks, especially around data privacy, security, and trust.

AI use among desk workers has soared by 23 per cent since January and 60 per cent since last September, according to Slack’s Workforce Index. However, nearly 2 in 5 of these workers say their company has no AI usage guidelines.

This gap between AI use and company policy can create a “wild, wild west” environment where anything goes — including risky behaviour.

We’ve all heard the news. A developer at an electronics company pasted proprietary code into ChatGPT, prompting the company to ban AI tools like ChatGPT outright.

And then there’s the issue of hallucinations, where AI generates content that sounds plausible but is entirely false. A lawyer recently cited fake cases produced by ChatGPT, resulting in major professional embarrassment.

Even worse, a chatbot fabricated a refund policy that a major airline was forced to honour, further highlighting how easily AI can create legal and financial chaos when it goes off-script.

As businesses rush to experiment with GenAI, it’s crucial to implement strong AI usage guidelines to prevent these risks and ensure that the benefits of AI don’t come at too high a cost.
 

Data privacy and security: The fine print of GenAI

When we input sensitive customer data or proprietary strategies into public AI models like ChatGPT, we’re risking more than just a data breach — we’re putting years of hard-earned customer trust on the line.

Beyond privacy, data governance is equally important. If our data ends up in the wrong hands, the consequences could be severe — think hefty fines and long-lasting damage to our brand’s reputation. Plus, there’s the chance that today’s data could train AI models that inadvertently benefit our competitors.

The potential of GenAI is vast, but as marketing leaders, it’s our responsibility to ensure that our enthusiasm for innovation doesn’t come at the cost of our customers’ trust.
 

Setting the right guardrails for GenAI

At Salesforce, we recognised these risks early and developed the Einstein Trust Layer, which secures and anonymises data while preventing future leaks. But the right technology alone isn’t enough — governance is key.
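
To make that idea concrete, here is a minimal sketch of the kind of masking a trust layer can perform before a prompt ever reaches a public model. It is illustrative only, not the Einstein Trust Layer’s actual implementation; the PII_PATTERNS, the mask_pii helper and the example prompt are all hypothetical.

```python
import re

# Hypothetical patterns: a real trust layer uses far more robust detection
# (named-entity recognition, field-level metadata, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(prompt: str) -> tuple[str, dict]:
    """Replace detected PII with placeholder tokens before the prompt leaves
    our systems; keep the mapping so responses can be re-identified internally."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

# Example: the public model only ever sees the masked prompt.
raw = "Draft a renewal email to jane.doe@example.com, phone +61 412 345 678."
masked, mapping = mask_pii(raw)
print(masked)   # Draft a renewal email to <EMAIL_0>, phone <PHONE_0>.
print(mapping)  # {'<EMAIL_0>': 'jane.doe@example.com', '<PHONE_0>': '+61 412 345 678'}
```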

Here’s what CMOs can do to set the right GenAI governance frameworks:

  • Pilot and test: Before deploying AI tools widely, run internal pilots to ensure they meet your security standards.
  • Vet and approve: Be diligent in vetting AI tools and providers, choosing only those that are secure and align with your company’s values. Ensure your teams only access approved products with built-in protections like the Einstein Trust Layer.
  • Set clear guidelines: Establish clear policies on AI use to prevent unauthorised access and misuse.
  • Train your teams: Equip your teams to think critically and act as human guardrails, catching potential AI errors before they cause harm.

The potential of GenAI is palpable, but it’s up to us to guide our teams through this transformation.

Set the vision, establish the guidelines, and provide the tools and training they need to use AI safely and effectively. By doing so, we can harness AI’s power while protecting our business and maintaining customer trust.

Take a read of our AI Strategy Guide for more on emphasising trust in your AI approach.
 
