To break through the adoption barrier, Australia's biggest companies must lead the charge on Gen AI governance
Generative AI (Gen AI) is reshaping industries at a breakneck pace, unlocking new efficiencies, driving innovation, and transforming how businesses operate. While startups are often seen as the fast-moving adopters of AI, it is large corporations that must play a critical role in leading the charge on Gen AI governance.
Australia's biggest bank, Commonwealth Bank of Australia (CBA), improved operational efficiency by implementing AI-powered customer service that handles over 500,000 interactions monthly and has significantly reduced wait times. Meanwhile, the nation's biggest telco, Telstra, enhanced data-driven decision-making through AI, reducing network outages by 30 per cent with predictive maintenance and improving overall service reliability.
Despite the clear benefits of AI, many corporates remain cautious about full-scale adoption. A 2024 Salesforce report highlighted that while 75 per cent of Australian businesses are either using or planning to use AI, only 25 per cent feel confident in their governance practices. This gap between ambition and preparedness is a significant concern, as governance frameworks are essential for minimizing the risks associated with AI deployment.
Larger companies often face internal barriers, such as slow decision-making processes, siloed operations, and a lack of in-house AI expertise, which can hinder the swift adoption of AI governance. However, the cost of inaction is high. Corporations that delay implementing governance frameworks risk falling behind more agile competitors, facing regulatory penalties, or worse—experiencing reputational damage from poorly governed AI systems.
Why Startups Are Moving Faster
In contrast to corporates, AI startups are moving quickly, experimenting with AI technologies and rapidly deploying solutions. Entrepreneurs like Annie Liao, founder of Build Club (an AI community for engineers, researchers and founders), are taking advantage of their agility to innovate at speed, often without the same governance concerns that weigh down larger enterprises.
Harrison.ai uses AI to assist radiologists in diagnosing medical scans, analyzing over 1 million images in partnership with I-MED, driving significant healthcare improvements.
Relevance AI enables companies to leverage vector-based AI search for better data insights, attracting over 400 customers globally within 18 months of launch.
Hnry automates tax filings and financial admin for freelancers, growing to 10,000 Australian users in just two years, and securing $35 million in Series B funding.
However, without the resources and structure that corporates have, startups may overlook the long-term risks that come with unchecked AI development.
While startups can drive innovation, it’s corporates that must lead the way in setting the governance standards that ensure AI is used safely and ethically across industries. By collaborating with startups, corporates can leverage their agility while providing the necessary oversight and resources to scale AI responsibly.
Corporates not only have the resources and influence to implement robust governance frameworks, but they are also the key players in setting the ethical, transparent, and accountable standards for the entire AI ecosystem.
Recently, the Australian government launched its updated **AI Standards Framework**, which provides clear guidance on how businesses—especially larger enterprises—can develop, implement, and monitor AI systems in a responsible manner. While startups are often highlighted for their agility in adopting new technologies, corporates must recognize that governance is not just about compliance. It’s about building trust, ensuring long-term sustainability, and protecting against the potential risks that come with AI deployment.
Key considerations
There are three clear reasons why corporates must take AI governance seriously.
The first is the need to protect brand reputation and trust: large companies face greater scrutiny than their smaller counterparts. One misstep in AI governance can lead to public relations disasters, legal repercussions, and a significant loss of consumer trust.
Facebook (Meta) faced backlash in 2020 when its AI algorithm for content moderation disproportionately flagged posts from minority groups, leading to accusations of bias and public distrust.
Amazon scrapped its AI-based hiring tool in 2018 after it was found to discriminate against female candidates, damaging the company's reputation for diversity and inclusion.
Apple came under fire in 2019 when its AI-driven credit card algorithm was accused of gender bias, offering lower credit limits to women, which led to investigations and widespread criticism.
Corporations that invest in strong governance frameworks can protect their brand reputation and differentiate themselves as responsible AI leaders in the market.
Next, there is the need to mitigate regulatory risks. Governments worldwide, including Australia's, are tightening regulations around AI. The EU AI Act, first proposed in 2021, categorizes AI applications by risk level and imposes strict regulations on high-risk uses such as facial recognition and healthcare AI systems. This framework is designed to ensure transparency, fairness, and ethical AI use across sectors. Local media has extensively covered its potential impact on businesses, particularly those dealing with sensitive data and consumer safety.
In 2022, China introduced its deepfake regulations, which require all AI-generated content, such as deepfakes, to be clearly labeled, and place strict rules on AI service providers to prevent misuse, including illegal impersonation or manipulation of identities. These rules came into force in early 2023 and are considered some of the strictest globally, aimed at curbing misinformation and safeguarding national security.
By embedding the principles outlined in the AI Standards Framework—fairness, transparency, accountability, and security—corporates can future-proof their operations against stricter regulatory requirements. Failing to act now could result in rushed compliance later, leading to costly disruptions and potential penalties.
Finally, ethical AI is a competitive advantage. Corporations have a unique opportunity to use ethical AI as a differentiator. Consumers and business partners are increasingly seeking out companies that prioritize responsible technology use. By embedding ethical principles into AI systems, corporates can create competitive advantages, attract forward-thinking clients, and secure long-term market leadership.
Perspectives
At our upcoming InnovAItor event, Dr. Kendra Vant and Lee Hickin will provide valuable insights into the importance of Gen AI governance from both a theoretical and practical standpoint, offering a clear roadmap for corporate leaders.
Dr. Kendra Vant, AI Mentor for Boards & Startups (ex-Xero & Seek), stresses the importance of governance at scale: “For corporates, governance isn’t a nice-to-have—it’s a necessity. AI technologies have incredible potential, but they must be deployed thoughtfully. It’s about ensuring that the systems we build today don’t create problems for tomorrow.”
Lee Hickin, Head of AI Policy at Microsoft Asia, adds that the role of governance is crucial for larger enterprises seeking to scale AI responsibly. “Corporates are in a unique position to set the tone for responsible AI use across industries. By adopting governance practices early, they not only mitigate risks but also position themselves as leaders in ethical innovation. The challenge isn’t just in scaling AI, but in scaling it the right way.”
Both Vant and Hickin will discuss how corporates can navigate the complexities of AI governance while remaining at the forefront of innovation.
So what actions should corporates be taking right now?
- **Implement the Australian AI Standards Framework**: Corporates should start by aligning their AI strategies with the recently launched AI Standards Framework, focusing on the principles of fairness, transparency, and accountability. This will not only protect against future regulatory risks but also build trust with stakeholders.
- **Invest in Internal AI Expertise**: Developing in-house AI expertise is critical for corporates looking to implement AI governance effectively. This includes training teams on AI ethics, hiring AI specialists, and creating cross-functional governance committees to oversee AI deployment.
- **Build Collaborative Partnerships**: Corporations should consider forming strategic partnerships with startups to accelerate AI innovation. By combining the agility of startups with the governance frameworks of larger enterprises, both parties can drive value while minimizing risks.
Next steps
AI governance is no longer optional—it is a fundamental requirement for corporations looking to lead in the age of AI. By adopting strong governance practices now, corporates can mitigate risks, build consumer trust, and ensure they remain competitive in an increasingly AI-driven world.
With the support of frameworks like Australia’s AI Standards and guidance from industry leaders like Dr. Kendra Vant and Lee Hickin, corporates have a clear path to responsible AI deployment. The time for hesitation has passed—now is the time for action.