Australia is on the cusp of a major shift in how artificial intelligence (AI) is regulated. As the European Union’s AI Act sets a new global benchmark, legal and regulatory bodies in Australia are signalling similar moves. While Australia hasn’t yet passed AI-specific legislation, the writing is on the wall: businesses must act now to bring their AI systems into alignment with emerging expectations on transparency, fairness, privacy, and accountability.
The conversation about AI is no longer just about innovation; it’s about governance and risk management. The regulatory tide is turning, and smart businesses will get ahead of it. That means putting legal, ethical, and operational frameworks in place now, before a regulator or journalist starts asking hard questions.
Why EU-Style AI Regulation Matters in Australia
The EU AI Act categorises AI systems by risk and imposes strict obligations on providers and users of “high-risk” AI, such as systems used in employment, finance, healthcare, and law enforcement. Those systems must be transparent, subject to oversight, and demonstrably non-discriminatory. Penalties for non-compliance are steep.
While Australia is still evaluating its own AI governance model, existing Australian law, especially privacy, discrimination, and consumer protection law, already applies to many AI use cases. And that’s before anticipated reforms are implemented.
There’s a misconception that because there’s no ‘AI law’ in Australia yet, businesses are free to do what they like. That’s dangerously wrong. AI that mishandles data, perpetuates bias, or misleads consumers can already land a company in hot water.
Step One: Conduct an Internal AI Audit
The first step for any business using or planning to use AI is to know exactly where and how it’s being deployed.
You can’t manage risk if you don’t know where it is. Start by identifying every process where AI or algorithmic decision-making is used, whether that’s in hiring, customer service, marketing, or credit risk assessment. An AI audit should include:
- A catalogue of all AI tools used or developed
- Documentation on data sources, training processes, and model design
- Clarity on who built the systems (in-house, vendors, third parties)
- Review of system outputs for potential bias or discrimination
- A legal assessment of potential breaches of existing laws
For many companies, this may require coordination between legal, IT, HR, and compliance departments.
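To make the audit concrete, the catalogue can start life as a simple structured register that legal, IT, HR, and compliance all contribute to. The sketch below is a minimal illustration; the field names and checks are assumptions to adapt, not a prescribed audit standard.

```python
# A minimal sketch of an AI system register; fields and checks are
# illustrative assumptions, not a prescribed audit standard.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                 # e.g. "resume-screening-model" (hypothetical)
    business_use: str         # hiring, customer service, marketing, credit risk...
    built_by: str             # "in-house", "vendor", or "third-party"
    owner: str                # the team or person accountable for the system
    data_sources: list[str] = field(default_factory=list)
    training_documented: bool = False   # training process and model design on file?
    bias_reviewed: bool = False         # outputs reviewed for bias/discrimination?
    legal_reviewed: bool = False        # assessed against existing law?

def audit_gaps(register: list[AISystemRecord]) -> dict[str, list[str]]:
    """Return the outstanding audit items for each catalogued system."""
    gaps: dict[str, list[str]] = {}
    for record in register:
        missing = []
        if not record.data_sources:
            missing.append("data sources undocumented")
        if not record.training_documented:
            missing.append("training/model design undocumented")
        if not record.bias_reviewed:
            missing.append("no bias or discrimination review")
        if not record.legal_reviewed:
            missing.append("no legal assessment")
        if missing:
            gaps[record.name] = missing
    return gaps
```

Even a register this simple makes the audit repeatable: new tools get a record when they are adopted, and a gap report shows at a glance which systems still need documentation or review.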
Aligning AI Use with Privacy and Anti-Discrimination Law
AI systems often rely on massive amounts of personal data, which means Australia’s Privacy Act is front and centre. With reforms to the Act already underway, including stricter rules around automated decision-making and consent, businesses must scrutinise how data is collected, processed, and shared.

AI tools trained on biased data or producing discriminatory outcomes can also expose companies to liability under Australia’s anti-discrimination laws. Bias in AI isn’t just a technical issue; it’s a legal one. If your algorithm disadvantages certain groups, you could be in breach of the Sex Discrimination Act, the Equal Opportunity Act, or other legislation. Businesses should consider the following (a simple monitoring sketch appears after the list):
- Whether the AI could indirectly discriminate against protected groups
- How individuals can contest decisions made by automated systems
- What processes are in place for ongoing bias monitoring
- Whether AI outputs are explainable and transparent
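One way to make bias monitoring routine is to compare favourable-outcome rates across groups on a regular schedule. The sketch below is a minimal illustration: the function names are assumptions, and the 0.8 threshold echoes the US “four-fifths rule”, a screening heuristic rather than any test under Australian anti-discrimination law.

```python
# A minimal sketch of ongoing bias monitoring: compare favourable-outcome
# rates across groups. The 0.8 threshold echoes the US "four-fifths rule",
# used here only as an illustrative screening heuristic, not as a test
# under Australian anti-discrimination law.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, favourable outcome?) pairs."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome  # True counts as 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag any group whose rate falls below threshold x the highest rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]
```

For example, if one group’s favourable-outcome rate is 30% while the highest group’s is 50%, the ratio of 0.6 falls below 0.8 and the system should be escalated for closer review. A flag is a prompt for human investigation, not proof of unlawful discrimination.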
Cross-Generational Training and Culture Building
Legal compliance is essential, but it’s only half the picture. Effective AI governance also depends on company culture, and that means empowering people at every level to understand and question AI.
We often see a generational divide when it comes to digital literacy. Younger staff may be more comfortable using AI tools, while older staff bring a critical, cautious perspective. Both are valuable, but both need training. Businesses should design training programs that are accessible, tailored to different roles and age groups, and focused on:
- What AI is (and isn’t)
- The ethical and legal responsibilities of AI users
- How to spot risks like bias, misinformation, or data misuse
- Reporting pathways for employees to raise AI concerns
Transparency and defensibility start with people. If staff are using AI without understanding the risks, that’s a liability waiting to happen.
Building Defensible and Transparent AI Use Cases
At the heart of defensible AI use is documentation. If regulators or courts come knocking, your ability to show that your AI practices were deliberate, monitored, and aligned with existing law can be the difference between a manageable inquiry and a reputational disaster.
Here are Burch’s Top Tips for Defensible AI Deployment:
- Establish Clear Governance
Assign accountability for AI systems, ideally at the executive level. Develop policies that govern how AI is selected, tested, deployed, and retired.
- Maintain Thorough Documentation
Keep records of how models are trained, what data is used, what decisions they make, and why. Version control, audit trails, and change logs are critical; a minimal logging sketch appears after this list.
- Implement Human Oversight
Even if decisions are made by AI, ensure there’s always a human who can intervene or explain outcomes, especially in high-risk areas like hiring.
- Perform Regular Risk Assessments
Risk doesn’t end when the system is deployed. Regular reviews should test for bias, data drift, and unintended consequences.
- Create a Transparency Strategy
Be clear with customers, employees, and stakeholders about when AI is used and how decisions are made. Explainability isn’t just ethical; it’s increasingly a legal requirement.
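To tie the documentation and human-oversight tips together, each automated decision can be written to an append-only audit log recording the model version, what the system saw, and whether a human intervened. The sketch below is a minimal illustration; the field names and JSON-lines storage format are assumptions, not a standard.

```python
# A minimal sketch of a decision audit trail; field names and storage
# format (JSON lines) are illustrative assumptions, not a standard.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(log_path: str, model_version: str, inputs: dict,
                 output: str, reviewer: Optional[str] = None,
                 overridden: bool = False) -> None:
    """Append one automated decision to an append-only audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Ties each decision to a specific model version and change log.
        "model_version": model_version,
        # Hash the inputs so the log shows what the system saw without
        # storing raw personal data alongside every decision.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        # Records who could intervene or explain, and whether they did.
        "human_reviewer": reviewer,
        "overridden": overridden,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Hashing the inputs records what the system saw without copying raw personal data into the log, which keeps the trail useful in an inquiry without creating a new privacy exposure.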
With AI use accelerating and regulatory momentum building, businesses that fail to act now could find themselves scrambling later. This is about more than ticking compliance boxes; it’s about earning trust and building resilience. We’re entering an era where trust in AI is going to be a competitive advantage. The companies that succeed will be those that can prove they use AI responsibly, fairly, and transparently. Whether your organisation is already using AI or just starting to explore it, now is the time to prepare.
Last updated: 23 July 2025