20 April 2026 · AI Compliance

Australia's AI Rules Are Changing in December 2026 — Here's What Your Business Needs to Do

Quick Answer

From December 10, 2026, Australian businesses must explain automated decisions that affect people — including whether AI is involved and what data it uses. The small business exemption is gone. Non-compliance penalties reach $50 million. If you use AI in any customer-facing process, you need to act before December.

Most Australian business owners have no idea this is coming. The Privacy Act 1988 is getting its biggest overhaul in decades, and it directly affects every business using AI — including yours.

This is not a future concern. The reforms take effect on December 10, 2026 — less than eight months away. If you are using AI in any part of your business that touches customer data, you need a plan.

Here is what is actually changing, what counts as an automated decision, and exactly what you need to do to be ready.

What's actually changing

The Privacy Act 1988 amendments take effect on December 10, 2026. Here is what matters for businesses using AI.

Organisations must explain automated decisions. If your business makes decisions using AI or automation that affect people, you must be able to explain whether AI is involved and which personal data is being used. This applies to any decision made without meaningful human involvement.

The small business exemption is being removed. Previously, businesses with less than $3 million in annual turnover were exempt from most Privacy Act obligations. That exemption is going. Nearly every Australian business will now be covered — not just the large ones.

Penalties are severe. Non-compliance can attract penalties of up to $50 million AUD or 30% of adjusted turnover for the relevant period, whichever is greater. These are not theoretical numbers. They are designed to make non-compliance far more expensive than compliance.

At a glance:

  • Dec 10, 2026: deadline for compliance
  • $50M: maximum penalty
  • $3M small business exemption: removed
  • Every business: now covered

What counts as an automated decision

An automated decision is any decision made without meaningful human involvement that affects someone. If software or AI is making the call — or heavily influencing it — it counts.

This is broader than most people realise. It is not limited to fully autonomous systems. If AI is doing the analysis and a human is just rubber-stamping the result, that likely qualifies.

Examples of automated decisions

  • AI scoring job applicants
  • Automated loan or insurance approvals
  • AI-generated compliance reviews
  • Chatbots making recommendations based on customer data
  • Lead scoring and automated follow-up sequences
  • AI grading or assessment systems

If you are unsure whether something in your business qualifies, it probably does. The safest approach is to treat any AI-assisted process that affects a person as an automated decision and apply the requirements accordingly.

What you need to do

This is the practical part. Here is a compliance checklist you can start working through today. You do not need a lawyer to begin — most of this is operational.

AI compliance checklist

  • Audit every AI tool and automation in your business
  • Document what personal data each system processes
  • Add clear disclosures where AI makes or influences decisions
  • Build human review processes for high-stakes decisions
  • Update your privacy policy to cover AI and automated decisions
  • Train your team on the new requirements
  • Review third-party AI tools for compliance (their problem is your problem)
  • Set up audit trails and logging for automated decisions

Start with the audit. You cannot fix what you do not know about. List every AI tool, chatbot, automation, and algorithm in your business. Include third-party tools — if you are using a CRM with AI lead scoring, that counts. If your support tool uses AI to suggest responses, that counts too.

Document the data flows. For each system, write down what personal data it receives, what it does with it, and what decisions it makes or influences. This does not need to be a legal document. A clear spreadsheet is a good start.
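That spreadsheet can just as easily live in code. Here is a minimal sketch of what such a register might look like in Python — the system names, fields, and values are all illustrative, not a legal template or a prescribed format:

```python
import csv

# Hypothetical register of AI systems and the personal data each one touches.
# One row per system: what data goes in, what it is for, what it influences,
# and whether a human reviews the result.
register = [
    {
        "system": "CRM lead scoring",
        "personal_data": "name, email, enquiry history",
        "purpose": "prioritise follow-up",
        "decision_influenced": "who gets a call back first",
        "human_review": "yes, before any lead is discarded",
    },
    {
        "system": "Support reply suggestions",
        "personal_data": "ticket text, customer email",
        "purpose": "draft responses",
        "decision_influenced": "wording of replies (a human sends them)",
        "human_review": "yes, agent edits and sends",
    },
]

# Write the register out as a CSV so anyone in the business can read it.
with open("ai_data_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=register[0].keys())
    writer.writeheader()
    writer.writerows(register)
```

The point is not the tooling — a shared spreadsheet works just as well. The point is that every system gets a row, and every row answers the questions a regulator would ask.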

Add disclosures. Wherever AI makes or influences a decision about a person, tell them. This means clear labels on chatbots, notifications in automated emails, and explanations in any AI-driven assessment or scoring system.

Build in human review. For high-stakes decisions — hiring, lending, access to services — there needs to be a genuine human review step. Not a rubber stamp. A real checkpoint where a person can override the AI.
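In code, a genuine checkpoint means the system refuses to finalise a high-stakes decision until a human has made the call. A minimal sketch of that gate, assuming Python — the class and function names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str  # e.g. "approve" or "decline"
    high_stakes: bool       # hiring, lending, access to services


def finalise(decision: Decision, human_override: Optional[str] = None) -> str:
    """Return the final outcome.

    High-stakes decisions require an explicit human call — the AI
    recommendation alone is never enough. For low-stakes decisions, a
    human can still override, but the AI result may stand on its own.
    """
    if decision.high_stakes:
        if human_override is None:
            raise ValueError("High-stakes decision requires human review")
        return human_override
    return human_override or decision.ai_recommendation
```

The design choice worth copying is that the human step is enforced by the code path itself: there is no way to ship a high-stakes outcome without a person's input, so the review cannot quietly become a rubber stamp.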

Set up audit trails. Every automated decision should be logged. What data went in, what the AI decided, and when. If a regulator asks you to explain a specific decision, you need to be able to pull that up.
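Logging every decision does not require specialist software. A sketch of an append-only audit log in Python, using one JSON line per decision — the function name and fields are illustrative:

```python
import datetime
import json


def log_automated_decision(subject_id, system, inputs, outcome,
                           path="decision_log.jsonl"):
    """Append one automated decision to a JSON Lines audit log.

    Records what data went in, what the system decided, and when —
    exactly the three things you need to answer "why did your system
    make this decision about this person?"
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "subject_id": subject_id,
        "inputs": inputs,    # the personal data the system used
        "outcome": outcome,  # what the system decided or recommended
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only file like this is deliberately simple: it is cheap to write, easy to search by subject ID, and hard to quietly rewrite after the fact.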

The most common compliance gaps

We work with Australian SMBs every day. These are the gaps we see most often — and the ones that will cause the most problems after December.

Using ChatGPT or AI tools with customer data and no disclosure. This is the most common one. Staff paste customer emails, enquiries, or records into AI tools to draft responses. The customer has no idea their data is being processed by AI. Under the new rules, this needs to be disclosed.

AI chatbots making recommendations without flagging they're AI. If a customer interacts with a chatbot that recommends products, services, or next steps based on their data, the customer needs to know it is AI. A simple “AI-assisted” label goes a long way.

Automated lead scoring with no human oversight. If your CRM scores leads using AI and automatically routes or prioritises them, that is an automated decision. If it affects how someone is treated — whether they get a call back, how quickly they get a response — it needs human review for high-stakes cases and logging across the board.

No audit trail for AI-generated decisions. Most businesses have no logging for what their AI systems decide. When the regulator asks “why did your system make this decision about this person?” you need an answer. “We don't know” is not going to cut it.

Third-party tools processing data overseas without adequate safeguards. If you use AI tools hosted overseas — and most are — you need to ensure adequate data protection is in place. Data Processing Agreements are the bare minimum.

Gap                                       | Risk level | Fix
No AI disclosure in customer-facing tools | High       | Add clear "AI-assisted" labels
No audit trail for automated decisions    | High       | Implement logging on all AI systems
Third-party AI tools without DPAs         | Medium     | Review and sign Data Processing Agreements
No human review for high-stakes decisions | High       | Add human-in-the-loop checkpoints
Privacy policy doesn't mention AI         | Medium     | Update privacy policy before December

How to build AI systems that are compliant from day one

Compliance is cheaper to build in than to retrofit. Bolting on audit logging, disclosure labels, and human review after the fact is messy and expensive. Building them in from the start is straightforward. It is a design decision, not an afterthought.

Every AI system we build at AI-DOS includes audit logging, clear AI disclosure, and human escalation paths as standard. Not because we predicted these reforms — because it is the right way to build AI systems. Transparency and accountability should not be optional extras.

If you are building new AI automations, build them right from the start. The marginal cost of including compliance features during the build is a fraction of retrofitting them later.

If you already have AI tools running in your business, audit them now — not in November. The businesses that start early will have time to fix issues properly. The ones that wait until Q4 will be scrambling.

If you need help assessing where your current systems stand or building new ones the right way, that is exactly what our AI consulting and strategy service covers.

The honest verdict

This is an opportunity, not just a risk

This is not about fear. It is about doing things properly. The businesses that prepare now will have a competitive advantage — customers trust companies that are transparent about AI. The ones that ignore this will scramble in Q4 or face real penalties. Compliance is not a burden if you build it in from the start. It is a signal that you take your customers seriously.

The Privacy Act reforms are real, the deadlines are firm, and the penalties are substantial. But this does not have to be painful.

If you are already using AI responsibly — with clear disclosures, proper logging, and human oversight where it matters — you are most of the way there. You just need to formalise it and update your documentation.

If you have been moving fast and not thinking about this, now is the time. Eight months is enough to get compliant. Four months will feel rushed. Two months will be expensive.

Start now. Audit what you have. Fix the gaps. Build new systems the right way.

People also ask

When do Australia's new AI rules take effect?

The Privacy Act amendments take effect on December 10, 2026. Businesses should start preparing now — waiting until Q4 will make compliance significantly harder and more expensive.

Does the small business exemption still apply for AI?

No. The small business exemption (previously shielding businesses under $3M annual turnover) is being removed under the new reforms. Nearly every Australian business will be covered, regardless of size or revenue.

What are the penalties for non-compliance with AI regulations in Australia?

Penalties reach up to $50 million AUD or 30% of a company's adjusted turnover for the relevant period, whichever is greater. These are among the strictest data privacy penalties globally.

Related reading

Is AI Automation Worth It? ROI Breakdown for Australian SMBs — The honest costs, returns, and what determines whether AI automation pays off.

How to Use AI in Your Business — A practical guide to getting started with AI the right way.

Need help getting your AI systems compliant?

We build AI automation with compliance baked in from day one. If you need an audit of your existing systems or want to build something new the right way, let's talk.

Book a strategy session
Aidan Lambert

Founder, AI-DOS

Aidan is the founder and lead automation architect at AI-DOS. He personally builds every system the agency delivers — from architecture to production handover.

More about AI-DOS