Source Code Exposed on ChatGPT Sparks Urgent Concerns
Netskope has published a report revealing that source code is posted to ChatGPT more frequently than any other form of sensitive data, at a rate of 158 incidents per 10,000 users per month. With the rising popularity of ChatGPT, this poses significant cybersecurity risks.
To address the growing concerns, Netskope has incorporated artificial intelligence (AI) capabilities into its secure access service edge (SASE) platform. This enhancement enables the platform to identify potential threats more effectively and to classify data using trained classifiers. With proper training, the AI can recognize new types of data and improve threat detection accuracy.
Naveen Palavalli, Vice President of Product Go-to-Market Strategy for Netskope, explained that their AI technology is an expansion of their existing cloud-based SASE platform. By leveraging AI and machine learning algorithms, Netskope enhances its data loss prevention (DLP) capabilities to monitor network traffic, bolster performance, and identify threats.
DLP has become a growing concern as users increasingly upload sensitive information, including source code, healthcare data, and personally identifiable information, onto publicly accessible services like ChatGPT. Palavalli emphasized the need for proper data restrictions; without them, the data can be used to train large language models (LLMs) and potentially become available to anyone.
While some organizations have opted to ban platforms such as ChatGPT due to these concerns, Palavalli suggests that employing a DLP capability that alerts users when sharing sensitive data is a more practical solution. Without proper guidance, end-users may find ways to use generative AI platforms clandestinely.
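To make the alerting approach concrete, here is a minimal sketch of how a DLP-style check might flag sensitive content before it is sent to an external service. The patterns and category names below are hypothetical and purely illustrative; production DLP engines such as Netskope's rely on trained classifiers rather than simple regular expressions.

```python
import re

# Illustrative patterns only -- real DLP products use far more
# sophisticated, trained detection than these regexes.
SENSITIVE_PATTERNS = {
    "source code": re.compile(r"\b(def |class |#include|import |function\s*\()"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible api key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def dlp_alerts(text: str) -> list[str]:
    """Return the categories of sensitive data detected in outbound text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt a user is about to paste into a generative AI service.
prompt = "def decrypt(token): ...  # note: my SSN is 123-45-6789"
for category in dlp_alerts(prompt):
    print(f"Warning: prompt appears to contain {category}")
```

A check like this can warn the user (or block the request) at the network edge instead of banning the platform outright, which is the trade-off Palavalli describes.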
With the evolving landscape of AI, organizations are realizing the need to rely on cloud-based platforms to store and analyze vast amounts of data necessary for training AI models. Palavalli noted that most organizations lack the IT resources required for independent data collection, storage, and analysis.
Organizations are currently engaged in an AI-powered cybersecurity arms race against cybercriminals who exploit generative AI to launch attacks such as multimedia-based phishing attempts. To win this ongoing battle, organizations must align themselves with vendors that possess the necessary resources to keep pace with cybercriminal investments in AI.
While numerous cybersecurity vendors are investing in AI, organizations should also recognize that cybersecurity professionals prefer working for companies that equip them with the tools they need to succeed. Organizations lacking access to cybersecurity AI platforms will be vulnerable targets for cybercriminals, who have already begun leveraging AI across a range of attack vectors.
As organizations brace themselves for potential attacks, mitigating the risks associated with source code exposure on platforms like ChatGPT is crucial. Establishing robust cybersecurity measures, including proper data restrictions and DLP capabilities, will play a pivotal role in safeguarding sensitive information against unauthorized access. By prioritizing AI-driven cybersecurity solutions, organizations can bolster their defenses and better protect against emerging threats.