Setting up an AI Strategy: A governance perspective
Key Takeaways from Our Webinar on AI Governance for Competitive Advantage
The rapid expansion of AI across businesses of all sizes underscores the need for responsible AI practices, as running an AI project without governance can significantly increase risk. With artificial intelligence reshaping operations and interactions, organisations must establish frameworks to harness AI’s power responsibly.
Our recent webinar, “AI Governance to Drive Competitive Advantage,” explored the critical steps companies can take to navigate challenges and leverage AI’s benefits while maintaining ethical and legal standards. Here’s a sneak peek of what we covered – sign up to stream the full discussion on demand for deeper insights.
The ‘Say-Do’ Gap in Responsible AI
Kicking off the session, Jane Robinson, COO of Avocado, addressed a growing concern in AI governance: the disparity between organisations’ intentions and actions in responsible AI use. A staggering 82% of Australian businesses claim to use AI responsibly, yet only 24% actually follow best practices. This gap, as Jane noted, highlights the urgency of strong AI frameworks that balance innovation with ethics, allowing organisations not only to protect their operations but also to secure a competitive advantage.
The risks of running AI without governance – Why AI governance is essential
Nicole Hampel, an AI governance strategist at AI Consulting Group, shared that governance is foundational, stating,
“AI governance is about setting up frameworks, policies, and procedures to ensure AI systems are developed and used ethically, transparently, and in compliance with regulations.”
Without this, organisations risk unintended consequences like bias, privacy breaches, and compliance failures. Nicole stressed that a robust AI governance framework allows businesses to safely innovate, building trust with customers and regulators alike.
Regulatory updates – a global and local perspective on AI Regulation
Globally, countries are beginning to implement regulations to address the ethical use of AI. Nicole provided an overview of the global approach to AI regulation, pointing out notable examples like the EU’s AI Act, the ISO/IEC 42001 standard, and the US National Institute of Standards and Technology (NIST) framework. Each focuses on transparency and accountability, making AI governance frameworks essential for any organisation looking to operate responsibly.
In Australia, the regulatory landscape is evolving, with voluntary safety standards already in place and mandatory guardrails for high-risk settings likely on the horizon. As Nicole explained, while there are currently no mandatory legal requirements for AI governance, the Australian government has introduced a Voluntary AI Safety Standard with ten clear guardrails that help businesses develop and deploy AI responsibly. She noted,
“Australia’s voluntary safety standard aligns with global best practices and positions businesses to stay ahead of upcoming mandatory frameworks in high-risk settings.”
Critical Components of AI governance
The core elements of effective AI governance, Nicole explained, include ethics, compliance, transparency, and accountability. “Transparency means openly communicating how AI systems make decisions,” Nicole said. “This helps build trust and allows customers and regulators to understand AI’s role in your operations.”
Real-World Examples of AI governance Success
One example shared was an Australian aged care provider that successfully implemented AI governance, allowing them to operate more transparently in a highly regulated industry. Their adherence to best practices has positioned them as a leader, giving regulators confidence in their ethical AI use.
Legal pitfalls of AI adoption – The risks of running AI without governance
Haylen Pong, Principal at Pongan Legal, emphasised that adopting AI without governance could severely impact a company’s reputation and bottom line. Haylen shared a striking analogy:
“If an organisation tells me they’re implementing AI without governance, it’s like they’re saying they’re willing to increase their risk profile significantly.”
Common pitfalls include privacy infringements, copyright issues, and bias, all of which can lead to fines or litigation.
Complexities of AI in real-world scenarios
Our experts also tackled hypothetical scenarios, examining the nuances of accountability and liability. For instance, what happens when a self-driving car causes an accident, or when a diagnostic AI tool provides incorrect medical advice? Haylen explained that these situations highlight the importance of having a governance framework that protects both developers and users from costly repercussions, whatever the use case.
Setting up AI governance in your organisation – the key to innovation and efficiency
While AI governance is often viewed as a compliance measure, it also has the potential to drive innovation and efficiency. “Effective AI governance builds trust with customers, stakeholders, and even internal teams,” Nicole explained. “This trust is essential for businesses looking to expand their AI capabilities without fear of legal or ethical repercussions.”
Companies that invest in AI governance from the outset are better positioned to scale their AI initiatives. For organisations further along in their AI journey, retrofitting governance can ensure systems remain compliant with evolving regulations while continuing to drive operational efficiencies.
Nicole and Haylen underscored the importance of proactively setting up AI governance. Nicole suggested assessing current AI capabilities and implementing the ISO/IEC 42001 standard as a starting point. “A well-designed governance framework supports AI projects of various sizes, from small-scale implementations to enterprise-wide systems,” she said. For more mature adopters, she recommended an audit to identify governance gaps.
Key questions Boards should be asking
One of the webinar highlights was Haylen Pong’s response to the question, “What questions should the board be asking management?” Her answer offered a practical roadmap for leaders. Haylen recommended starting with a foundational question:
“If AI, why AI?” This ensures AI aligns with the company’s strategic objectives and isn’t just a tech add-on.
Boards should also consider transparency and accountability by establishing a cross-functional AI team to manage risks effectively. Haylen emphasised the need for clear compliance standards, tailored to the organisation’s goals and resources, and concluded by noting the importance of assessing AI’s financial impact – ensuring a clear understanding of AI’s benefits against its costs.
Watch the Full Webinar
Dive into the full webinar to learn more about establishing AI governance that safeguards innovation while ensuring compliance. Our panellists provide practical steps, valuable resources, and strategies to help organisations navigate the complexities of AI governance. Fill out the form to stream the full webinar.
Ready to get started? Reach out to our team today.