Australia’s Digital Transformation Agency (DTA) has issued interim guidance to public servants on the appropriate use of generative artificial intelligence (AI) tools. The guidance sets parameters for tools such as ChatGPT, Google Bard, and Bing AI, aiming to support responsible and safe use, minimize harm, achieve fair outcomes, and build community trust in emerging technology.
Public servants are advised to weigh the risks and benefits of each use case; examples of acceptable use include project planning, generating template slides, and confirming technical requirements. The guidance emphasizes that sensitive or classified information must not be entered into public AI platforms and recommends that agencies implement registration and approval mechanisms for staff user accounts. The DTA intends the guidance to be iterative, allowing government agencies to adapt it within their own organizations.
This move follows similar efforts by other governments, such as the UK Cabinet Office’s release of guidance on generative AI tools. A survey conducted by Global Government Forum indicates that over 10% of Canadian public servants have used AI tools like ChatGPT in their work, with interest focused on processing large volumes of data and real-time analysis of public service delivery. However, concerns about accountability, over-reliance, and a lack of understanding continue to hinder wider adoption of AI in public service delivery.