Australian Home Affairs Department’s Lack of Record-Keeping Raises Serious Concerns about AI Chatbot Security
The Australian Home Affairs Department is facing scrutiny over its lack of record-keeping around its use of the AI chatbot ChatGPT. Concerns have been raised about potential security risks after staff within the department said they could not recall what prompts they had entered into the chatbot during experiments, and documents suggested that no real-time records were kept.
The Home Affairs Department had previously stated that it was using ChatGPT for experimentation and learning purposes in four divisions, and that the use was being coordinated and monitored. However, records obtained under freedom of information laws indicate that no contemporaneous records were kept of the questions or prompts entered into ChatGPT or other tools during the tests.
This lack of record-keeping has raised serious security concerns, especially since staff members may have been using the chatbot to tweak code as part of their work. The documents call into question the claim that the use of ChatGPT was being monitored, and they cast doubt on the reliability of staff members’ after-the-fact recollections.
The Home Affairs Department has not commented on the matter. When asked to provide all prompts used between November 2022 and May 2023, the department instead provided a questionnaire asking staff members to recall the queries they had used and their purposes. In many cases, staff members admitted that they could not recall the exact prompts they had entered because too much time had passed. They also said the prompts were generic and did not involve sensitive information or details of the department’s infrastructure or data.
Most of the prompts mentioned by staff related to computer programming, such as debugging code errors or asking ChatGPT to write scripts for work. Staff noted that the chatbot’s answers were not always accurate.
Security concerns surrounding chatbots based on large language models revolve around the possibility that sensitive information entered as prompts could be incorporated into the model’s training data, potentially exposing it to other users. In one instance, a staff member in the cyber risk services division used ChatGPT for technical research, including questions about the UK government’s supply chain security policies. Another staff member, in the refugee, humanitarian and settlement division, used the chatbot to generate discussion questions for a briefing about a non-profit organization from another country.
The lack of record-keeping measures raises questions about the safeguards and protections in place within the Home Affairs Department. The use of ChatGPT should be accompanied by clear guardrails and comprehensive record-keeping practices to ensure data integrity and security. This issue highlights the need for a whole-of-government approach to the use of AI technology, as emphasized by the home affairs secretary, Michael Pezzullo.
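As a purely illustrative sketch of what contemporaneous record-keeping could look like in practice, the snippet below wraps each chatbot query in a function that appends a timestamped log entry before returning the response. The ask_chatbot helper and the log file path are hypothetical placeholders for this example; the sketch does not represent the department’s systems or any specific vendor API.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for an append-only prompt log (one JSON object per line).
LOG_FILE = Path("chatbot_prompt_log.jsonl")


def ask_chatbot(prompt: str) -> str:
    """Stand-in for a real ChatGPT/LLM call; returns a placeholder response."""
    return f"[placeholder response to: {prompt!r}]"


def ask_with_record(prompt: str, user: str, purpose: str) -> str:
    """Send a prompt to the chatbot and keep a contemporaneous record of it."""
    response = ask_chatbot(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,
        "prompt": prompt,
        "response": response,
    }
    # Append one line per interaction so the log can later be audited
    # or searched, for example in response to a freedom of information request.
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response


if __name__ == "__main__":
    ask_with_record(
        prompt="Explain what a null pointer dereference is.",
        user="analyst-001",
        purpose="learning/experimentation",
    )
```

A simple append-only log of this kind would let an agency answer later questions about what was entered into an external chatbot without relying on staff memory.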
Ultimately, the Australian Home Affairs Department’s failure to maintain proper records of the prompts entered into ChatGPT calls its monitoring claims into question and underscores the need for robust safeguards, both to protect sensitive information and to allow the reliability of AI chatbot responses to be assessed.