
ChatGPT in the workplace: the risk of disclosing confidential data

by admin

According to Cyberhaven Labs, misuse of the popular AI chatbot by businesses could lead to the disclosure of sensitive and confidential data: a growing number of private-company employees are feeding sensitive company data to ChatGPT, unaware of the risks to which they expose their employers.

According to the employees interviewed, the motivation behind this dangerous practice is that the AI chatbot developed by OpenAI significantly improves their productivity.

Cyberhaven Labs researchers analyzed the use of ChatGPT by 1.6 million workers at companies across all sectors. The results of the study are as interesting as they are worrying: 5.6% of the monitored workers have used ChatGPT in the workplace, and 4.9% have provided company data to the popular chatbot since its launch.

Many of these users are clearly unaware of how the chatbot works: it also uses the data provided by users to build its knowledge base, and that knowledge can then be shared publicly by the system itself when answering other users' requests. The disclosure of sensitive data is a crucial problem for companies, which adopt specific measures to prevent this information from leaking. Unfortunately, the solutions in use today cannot stop an employee from copying the contents of a corporate document into the popular chat to extract a summary or improve its form. The last resort is therefore to block employee access to the platform altogether, a solution already adopted by giants such as Verizon and JP Morgan.


Another fact that emerges from the research is that just 0.9% of employees are responsible for 80% of the leaks caused by copy-pasting company data into the chatbot. A significant increase in this percentage is to be expected in the coming months, especially in light of the technology's integration into more services, for example through the ChatGPT API.
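
To see why API integrations widen the exposure surface, consider a minimal sketch of such a service, assuming the official openai Python package (version 1.x) and a hypothetical summarize_document helper: any text that an employee or an internal tool hands to the API leaves the corporate network as part of the request.

```python
# Minimal illustrative sketch, not a real product integration.
# Assumes the official `openai` Python package (>= 1.0) and an API key
# in the OPENAI_API_KEY environment variable; the helper name and the
# document used below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_document(text: str) -> str:
    """Send document text to the ChatGPT API and return a summary.

    Note: `text` is transmitted in the request body whether or not it
    contains confidential material.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the user's document."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# A confidential report pasted in verbatim travels exactly like any
# other prompt, e.g.:
# summary = summarize_document(open("quarterly_report.txt").read())
```

From the network's point of view this is just an ordinary HTTPS request to OpenAI's API, which is one reason the monitoring solutions mentioned above struggle to tell confidential content apart from a harmless prompt.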

“Since ChatGPT was publicly launched, 5.6% of employees have tried using it at least once in the workplace. In addition, 2.3% of employees have entered confidential company data into ChatGPT – reads the report published by Cyberhaven Labs – Despite a growing number of companies openly blocking access to ChatGPT, usage continues to grow exponentially. On March 1st, our product detected a record 3,381 attempts to paste company data into ChatGPT per 100,000 employees”.

The researchers also tracked workers who copied data from the popular chatbot and pasted it elsewhere, such as into a company email, a Google Docs document or their source code editor. According to the research, employees copy data out of the chatbot more often than they paste company data into it, by a ratio of almost 2 to 1.

The research also highlights how the average company leaks sensitive data to ChatGPT hundreds of times every week. For example, during the week of February 26 to March 4, workers at the average company with 100,000 employees submitted confidential documents to the chatbot 199 times, customer data 173 times, and source code 159 times.

Clearly, ChatGPT cannot be considered a technology to keep away from our businesses; however, companies should understand the risks of misusing it and educate their employees on how to harness its powerful capabilities while avoiding the potential disclosure of confidential information.
