On 21 October 2024, the Office of the Australian Information Commissioner (OAIC) published guidance on privacy and the use of commercially available AI products. The guidance emphasises that organisations and government agencies must comply with their privacy obligations when using AI systems that handle personal information. This applies both to the data input into the AI and to any output generated that contains personal information.

Organisations should conduct thorough due diligence when selecting AI tools, ensuring they are suitable for the intended use and assessing privacy and security risks as well as the need for human oversight. Transparency is critical: privacy policies should be updated to clearly inform users about the use of AI, especially for public-facing tools such as chatbots.

If AI systems generate or infer personal information, including outputs such as deepfakes, it must be treated as personal information. Organisations must ensure that any use of personal information aligns with the purpose for which it was originally collected, or obtain consent for secondary uses. A "privacy by design" approach is recommended, including conducting Privacy Impact Assessments (PIAs) and regularly reviewing the AI's performance and data handling. The guidance also advises against inputting sensitive personal information into publicly available AI tools because of the high privacy risks involved.

Accuracy is another important consideration, as AI can produce inaccurate results. Organisations are expected to ensure that personal information remains accurate and to use measures such as disclaimers or watermarks to manage the risks associated with AI-generated data.