ChatGPT and Workplace Confidentiality
ChatGPT retains every conversation, and content submitted to it (for example, documents pasted in for proofreading) may be used to improve the model. Accidentally sharing confidential details such as client contact information can therefore lead to that data accumulating outside the company's control.
Some companies have banned its use in the workplace because of these security risks. To reduce risk further, consider instituting the following policies.
1. Personal Information
As a general rule, employees should not disclose personal information, such as their name, phone number, or email address, in ChatGPT prompts. Such details could enable others to identify and contact them, and could even make their way into public databases.
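One practical safeguard is to scrub obvious identifiers from text before it is pasted into a prompt. A minimal sketch in Python, assuming simple regex patterns (the patterns and function name here are illustrative only; real PII detection needs a dedicated tool):

```python
import re

# Illustrative patterns only; not an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A wrapper like this could sit between employees and the ChatGPT interface, so that prompts are sanitized before they ever leave the company network.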
Use of ChatGPT in the workplace should also be guided by caution, particularly where confidential company data or personal details that would otherwise stay private are involved. For example, avoid using it to create resumes or job applications, since the prompts and output may contain sensitive details about candidates or employees. Furthermore, its training data comes largely from online sources that are not always up to date or accurate, so its output (including anything resembling medical advice) can be inaccurate or harmful.
That said, there are compelling arguments for using AI tools at work, particularly for tasks that do not require deep thought or specialized knowledge. Lawyers use the free version of ChatGPT to generate basic legal documents such as client letters and emails and to edit drafts, while software developers rely on it for help with coding projects.
OpenAI takes cybersecurity seriously and has implemented various safeguards, but no system is immune to vulnerabilities that bad actors can exploit. Employers should therefore provide clear usage policies and training on acceptable and inappropriate uses of ChatGPT in the workplace.
Your policy could require employees to identify content generated in ChatGPT sessions as such, whether shared internally or externally, and to keep an audit trail of which prompts were used, before the content is shared or made public. Leaders should coach employees on being transparent when using this new technology and discourage any attempt to pass off ChatGPT output as their own work.
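Such an audit trail can be as simple as an append-only log recording who used ChatGPT, with which prompt, and where the output went. A sketch of one possible record format (the file path and field names are assumptions, not a prescribed schema):

```python
import datetime
import json
from pathlib import Path

LOG_FILE = Path("chatgpt_audit.jsonl")  # hypothetical log location

def log_usage(user: str, prompt: str, destination: str) -> dict:
    """Append one audit record per ChatGPT session to a JSON Lines file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "destination": destination,  # e.g. "internal memo", "client email"
        "labeled_as_ai": True,       # policy: output must be flagged as AI-generated
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only JSON Lines file keeps each session reviewable later without requiring any database infrastructure.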
2. Trade Secrets
Employees using ChatGPT to create documents may unwittingly disclose sensitive, confidential data that should be protected. Cyberhaven Labs found that sensitive or internal-only data was pasted into ChatGPT most often (319 incidents per week for every 100,000 employees), followed by source code (278 incidents) and client data (260 incidents).
More serious still are cases in which ChatGPT generates content based on information belonging to another individual or entity, and that content is shared without the legal rights to do so, potentially opening the company up to claims of unfair competition, false advertising, and other violations of law.
Keep in mind that ChatGPT was trained on wide swathes of public online information, including social media posts and conversations about work, hopes, fears, and personal details. It may therefore reproduce some of these details when an employee uses it to produce documents sent directly to clients or shared publicly.
To mitigate these risks, organizations should implement policies and practices which include:
Training: Conduct regular employee education sessions regarding acceptable and prohibited uses of ChatGPT at work, and make sure this training remains up-to-date.
Inventory: Establish a process to track ChatGPT usage at work and assess each use as low, medium, or high risk against established criteria, updating the assessments as needed.
Internal Labelling: Label all documents created with ChatGPT for work clearly as such.
External Transparency: Offer clients an easy way to identify content generated by ChatGPT (though this is not a foolproof solution).
Scoped Use: Limit ChatGPT at work to projects where it can be shown to increase productivity and efficiency.
Organizations that successfully navigate these issues stand a greater chance of reaping the benefits of ChatGPT without exposing sensitive or confidential information to third parties.
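The inventory step above could start as a simple rule-based triage of proposed use cases. A sketch, where the keyword lists and tier names are illustrative assumptions rather than established criteria:

```python
# Illustrative risk triage for proposed ChatGPT use cases. The keyword
# lists below are placeholders; real criteria come from company policy.
HIGH_RISK = {"client data", "source code", "financials", "trade secret"}
MEDIUM_RISK = {"internal memo", "draft policy", "meeting notes"}

def assess_risk(description: str) -> str:
    """Classify a use-case description as "high", "medium", or "low" risk."""
    text = description.lower()
    if any(term in text for term in HIGH_RISK):
        return "high"
    if any(term in text for term in MEDIUM_RISK):
        return "medium"
    return "low"
```

Even a crude classifier like this gives the review team a consistent starting point; high-risk requests can then be escalated for human evaluation.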
3. Business Confidential Information
Confidential information includes anything of material value to a business that cannot be discovered externally: internal proprietary information, client data, market research results, financial projections, or operational plans. Generative AI tools like ChatGPT pose unique challenges for protecting confidentiality in the workplace because data entered into them is hard to identify afterward and may have been altered beyond its original form.
Generative AI also presents risks employees need to be wary of, including over-reliance on its tools for tasks or decision-making, which can reduce attention to higher-level work such as policy refinement and quality control. Too much reliance may also inhibit employees from thinking critically and devising innovative solutions to complex problems.
Generative AI may also expose sensitive or personal data to unauthorized parties, potentially leading to privacy violations and financial losses. For instance, cyber attacks targeting providers of ChatGPT services could compromise the conversation logs and user details they hold.
Companies should establish clear policies regarding the use of generative AI in the workplace, with training and monitoring included as part of these policies. Furthermore, companies should require all uses of generative AI to be reported to a team that evaluates each one as high, medium, or low risk and keeps records of the evaluation.
Companies should also review the privacy and data retention policies of all generative AI tools and services to ensure they do not use these AI programs to share confidential or otherwise sensitive information with unapproved third parties.
Debevoise advises clients to engage in an in-depth dialogue with their teams and legal counsel about the appropriate use of generative AI in their workplaces, so as to mitigate its risks and preserve the confidentiality of the information they work with.
4. Intellectual Property
There is understandable concern that confidential business data could be misused to damage a company, though the threat is likely not as widespread as some headlines imply. Businesses that can use AI tools successfully without exposing confidential data stand to gain in employee productivity and efficiency.
PwC Australia, for example, has informed its employees of the data protection risks associated with using ChatGPT at work and advised them to use it only for personal tasks and never to share firm or client data through it.
Businesses should take great care to clearly define, and train their workforce on, how to use generative AI effectively, and should establish internal policies and procedures for handling employee queries about GAI. Such policies should cover review and approval of any data sets used to generate output, data deletion or re-identification by AI models, and other potential issues arising from GAI use.
Companies should also train their workforce on the limitations of generative AI and the inexactness of its answers, so employees understand it cannot replace fact-checking or quality control. Finally, organizations should put non-compete, non-solicitation, and confidentiality agreements in place to guard against employees breaching company policy by sharing confidential data with an outside service such as ChatGPT.
As the use of AI becomes more prevalent, companies should revisit their cybersecurity and data privacy practices and thoroughly examine current policies and procedures, particularly if the company is subject to requirements from laws, contracts, or insurance on top of its internal privacy policies. If you need assistance with GDPR compliance in the workplace, please reach out to Debevoise Data Blog Clerk Lex Gaillard or one of our other data lawyers.