The Artificial Intelligence Buzz Explained
Usage, Security, and Privacy Concerns with ChatGPT
May 3, 2023

There has been quite a buzz about this new Artificial Intelligence (AI) technology that can create uncannily human prose from a prompt, influencing everything from education and clinical care to our research ecosystem. With great power comes great responsibility.
ChatGPT is a state-of-the-art natural language processing tool developed by OpenAI that is accessible via a web-based chatbot or application programming interface (API). Its unique ability to interpret natural language text and produce contextually relevant, coherent responses has made it a popular choice for many applications, including customer service, educational assistance, text summarization, and conversational interfaces. It can even be used to write and correct software code. With its power and availability, ChatGPT has the potential to revolutionize the way we communicate with machines and with each other.
However, it is essential to be aware of the security risks associated with using ChatGPT for confidential or personal information, and we must exercise caution when sharing sensitive data. While ChatGPT is a highly advanced and powerful language model, its design also poses risks.
First, the terms and conditions for ChatGPT and other AI services may allow the host companies to use inputted information for purposes well beyond generating a response to the user. Sharing sensitive information with ChatGPT or similar systems compromises that information and can lead to further dissemination to unauthorized parties. This is not a theoretical concern; it has already happened. ChatGPT’s own Frequently Asked Questions page states: “No, we are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations.” Recent changes to the interface may offer some additional control over how shared information is reused, but you must still refrain from entering colleagues' or patients' confidential or personal information into ChatGPT or other chatbots. Cutting and pasting personal or confidential information into ChatGPT or any other web interface is a serious security and privacy breach. Even AI-powered writing or coding assistants may expose confidential or private information, since they invoke the same APIs and therefore send your data to a company or third party.
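To make that last point concrete, here is a minimal sketch of the kind of request body an AI-powered assistant typically builds before calling a chat-completion API. The endpoint, model name, and pasted text below are illustrative assumptions, not taken from any real assistant; the point is simply that whatever you paste is included, verbatim, in the data sent to the vendor's servers.

```python
# Illustrative sketch only: the endpoint, model name, and pasted text are
# hypothetical. It shows that pasted text is embedded verbatim in the
# request body an assistant would transmit to a third-party API.
import json

# Hypothetical text a user might paste into an assistant (placeholder only).
pasted_note = "CONFIDENTIAL: draft patient summary ..."

# A chat-completion request body in the general shape used by such APIs.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": f"Please summarize this: {pasted_note}"}
    ],
}

# An assistant would POST this body to a vendor endpoint, e.g. (illustrative):
#   https://api.openai.com/v1/chat/completions
body = json.dumps(payload)
print(body)

# The sensitive text leaves your machine in full.
assert pasted_note in body
```

Nothing in the payload is redacted or hashed before transmission, which is why pasting confidential material into any such tool, chatbot or assistant alike, amounts to disclosing it to a third party.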
Second, these algorithms, like any other technology, are not foolproof. They model patterns in human writing and can produce incorrect but believable responses, or reinforce biases introduced during their creation.
We must constantly be vigilant and mindful of the information we share online or through the applications we use in our daily business. Sharing information with an AI requires extremely careful consideration and, while it may seem innocuous compared to sharing with a person, doing so may carry significant risks. We must follow best practices for safeguarding personal and sensitive information to protect those who entrust it to us.
Resources:
- ChatGPT Frequently Asked Questions
- Article on security and privacy concerns with ChatGPT and other AIs
- University of Colorado HIPAA Security Policy
- Anschutz Medical Campus Acceptable Use of Information Technology Resources
Contributors: Sean Davis, Casey Greene, Melissa Haendel, Jayashree Kalpathy-Cramer, Nikhil Madhuripan, Charlotte Russell