With the emergence of ChatGPT, many members of our community are proactively exploring its use in academics, research, and administration. ChatGPT is a natural language processing tool driven by Artificial Intelligence (AI) technology that allows you to have human-like conversations with a chatbot. The language model can also answer questions and assist you with tasks such as composing emails, essays, and code. The UCSB Information Security team would like to provide some guidance on its appropriate use through the lens of university policy and risk mitigation.
At present, ChatGPT should be used with the assumption that any personal, confidential, or otherwise sensitive information may not be protected. Do not use ChatGPT with sensitive or confidential information, such as student information, health information, financial information, staff information, personally identifiable information (PII), or personnel conduct data (e.g., performance reviews). In general, any data classified by the University of California (UC) as Protection Level 3 (P3) or 4 (P4) should not be used. A guide with examples of P3 and P4 data can be found below in the References section.
Licensing Agreements
UC policy requires that all software licensing agreements with suppliers (and other entities), particularly those that will host UC data as part of their services, obligate the supplier to comply with various security and privacy practices. These include, but are not limited to, the prohibition of unauthorized use, disclosure, or sale of personal information and compliance with all UC cyber-liability insurance requirements. The data security and privacy provisions in these agreements are critical to UC’s ongoing efforts to protect the data and information related to its operations and the personal information of its staff, faculty, and students.
However, there is currently no agreement with OpenAI, the developer of ChatGPT, that would provide these types of protections. Consequently, the use of ChatGPT at this time could expose the individual user and UC to the potential loss and abuse of highly sensitive data and information. By entering information into the user prompts of ChatGPT, you may inadvertently disclose sensitive information. When you enter information into the tool, that information and the corresponding answer become a part of OpenAI's database. This information could potentially be included in responses to other people's questions. We are currently working with the UC Office of the President to analyze and address this issue.
Please be advised that, in accordance with UCSB Guidelines Concerning Click-Through Agreements, in the absence of an agreement covering the UC (including its staff, faculty, and students), your use of ChatGPT constitutes personal use and obligates you, as an individual, to assume responsibility for compliance with the terms and conditions set forth in OpenAI’s own Terms of Use.
It is important to be aware that while ChatGPT is available directly via OpenAI’s website, it is also increasingly being embedded in third-party tools. The same assumptions regarding data protection should apply to your use of those tools.
For further guidance on using ChatGPT, please review the references listed below that address OpenAI’s own recommendations and policies on educational use, research, streaming, and privacy. Please also note that OpenAI explicitly forbids the use of ChatGPT for specific categories of activity listed in its usage policies.
This guidance regarding ChatGPT also applies to other Large Language Models (LLMs) and Machine Learning/AI tools from vendors with which UC or UCSB does not have an agreement containing the provisions noted above. Further guidance on data protection with ChatGPT and other LLMs will be shared as soon as it becomes available.
References
- OpenAI support article: “How your data is used to improve model performance”
- OpenAI Usage policies
- OpenAI Sharing & publication policy
- OpenAI Privacy policy
- OpenAI Terms & policies
- UC Protection Level Classification Guide
- Quanta Magazine: “The Unpredictable Abilities Emerging From Large AI Models”
- The Atlantic: “Don’t Be Misled by GPT-4’s Gift of Gab”
- UC Presidential Working Group on AI: Final Report and Recommendations