Despite concerns around privacy, an increasing number of employees in the United States are turning to ChatGPT, a chatbot program developed by OpenAI, to assist with their work tasks. According to a Reuters/Ipsos poll, 28% of respondents said they regularly use ChatGPT at work, even though only 22% stated that their employers explicitly allowed the use of external tools like ChatGPT.
ChatGPT, powered by generative AI, allows users to engage in conversations and receive answers to various prompts. However, security firms and companies have raised concerns that its use could leak intellectual property and strategic information. Common work tasks employees use ChatGPT for include drafting emails, summarizing documents, and conducting preliminary research.
The poll also revealed that 10% of respondents said their employers explicitly prohibited the use of external AI tools, while approximately 25% were unsure whether their companies permitted such technology. This growing trend of employees using ChatGPT for work tasks has prompted companies including Microsoft and Google to restrict its use.
OpenAI, the developer of ChatGPT, declined to comment on individual employees using the program but assured corporate partners that their data would not be used to train the chatbot further without explicit permission. A central concern with generative AI services is that users often don't fully understand how their data is utilized. Because many of these services are free and operate without contractual agreements, they pose a particular risk for businesses: traditional vendor-assessment processes may not cover them at all.
Privacy watchdogs in Europe have criticized OpenAI for its mass data collection practices. Human reviewers at AI companies may read the conversations users have with ChatGPT, and researchers have found that similar AI systems can reproduce data absorbed during training, raising concerns about the security of proprietary information.
While some companies, such as Coca-Cola and Tate & Lyle, are exploring the use of ChatGPT while prioritizing security, others have banned its use entirely. Samsung Electronics, for instance, implemented a temporary ban after an employee uploaded sensitive code to the platform.
As this trend continues to evolve, balancing the productivity potential of AI against data security remains a challenge. While some employees find ways to use ChatGPT for harmless tasks, companies must carefully evaluate the risks these AI tools pose and implement measures to safeguard sensitive information. A cautious approach is crucial to avoiding breaches and information leaks.
In summary, the increasing popularity of ChatGPT among employees for work tasks, despite privacy concerns, raises questions about data security and intellectual property protection. Employers must strike a balance between benefiting from AI-powered productivity tools and implementing measures to mitigate potential risks. As the use of AI in the workplace becomes more prevalent, it is essential for companies to carefully assess and manage the security implications of these technologies.