DFAT is dedicated to identifying and adopting emerging technologies, including artificial intelligence (AI), to enhance secure business outcomes. We embrace new opportunities while recognising the uncertainties and risks that come with technological change. This public transparency statement outlines DFAT’s approach to AI.
Safe and responsible use is a guiding principle for AI at DFAT. We comply with relevant legislation, regulations and guidance, and apply best practice where applicable.
Our approach to AI
DFAT uses both automation and AI to strengthen our operations and deliver better outcomes. We follow the Organisation for Economic Co-operation and Development (OECD) definition of AI, in line with the Policy for the responsible use of AI in government. At DFAT, "AI" refers to applications of machine learning, deep learning, and generative AI. Rule-based analytics and rule-based automation are managed separately and are not considered AI under this definition.
Our people are central to the safe and effective use of AI at DFAT. We are committed to equipping our staff with the skills and knowledge needed to implement, maintain, and use AI responsibly. Comprehensive training ensures our workforce can confidently manage their use of AI, uphold ethical standards, and support secure business practices.
By combining advanced technology with a skilled and informed workforce, DFAT aims to harness the benefits of AI while maintaining the highest standards of safety, transparency, and accountability.
Based on the classification system for AI in government, we are using AI in:
- analytics for insights
- workplace productivity
- image processing.
We are applying AI in the following domains:
- service delivery
- compliance and fraud detection
- policy and legal
- corporate and enabling.
AI safety and governance
DFAT has internal governance for AI, covering every stage from initial idea to implementation. Our processes ensure:
- AI is implemented and used safely and responsibly
- AI performance is continuously monitored
- all legal and regulatory obligations are met
- risks and potential negative impacts are identified
- action is taken to reduce or prevent harm.
Each AI use case is assigned an Accountable Use Case Owner to oversee its management. Risks are assessed before implementation and reviewed regularly, with governance processes updated as needed to remain effective. DFAT uses AI only in ways that uphold security, privacy, transparency, and ethical standards, always maintaining human oversight (‘Human-in-the-Loop’).
All staff must complete training on the responsible use of AI before they are granted access to AI tools. Staff are encouraged to raise questions or report concerns through a dedicated internal channel.
Currently, the Chief Information Officer is responsible for ensuring DFAT complies with whole-of-government AI policies and maintains the quality and integrity of departmental data.
This transparency statement will be updated as our approach evolves, and at least every twelve months.
| Update Date | Update Comment |
|---|---|
| 29 November 2024 | Designation of Accountable Officials |
| 6 February 2025 | |
| 3 December 2025 | |