For many organisations, the most important factor when using an LLM (such as ChatGPT) is privacy and security: ensuring that confidential and personal information is not made publicly available and that its handling complies with the organisation’s privacy and security requirements.

While the language model itself poses minimal risks, it is crucial to examine how data is handled throughout the process, including transmission, storage, usage and distribution. In this article, we’ll delve into a practical example to shed light on the intricacies of privacy and security within the context of using an LLM (such as ChatGPT) hosted and accessed through an external platform like Ambit AI.

Privacy Risks and Mitigation Measures: In our example, an employee intends to generate customer correspondence using ChatGPT, prompting the model with personal information. Several high-level privacy risks arise, necessitating careful consideration and proactive measures. Let’s explore them:

  1. Personally Identifiable Information (PII) in Prompt and Completion Data: The prompt and completion data contain sensitive PII, including names, customer numbers, phone numbers, addresses, and financial details. To mitigate this risk, one can either verify the external platform’s secure handling of data or redact PII within the prompt itself. Redaction involves masking sensitive information to prevent its inclusion in the transmitted prompt.
  2. Secure Transmission of Data: Verifying the secure transmission of data over the internet is essential. Organisations should ensure that the external platform hosting the LLM implements robust security measures to safeguard data during transmission.
  3. Logging and Recording of Prompt and Completion Data: Providers often log prompts, completions and related operations for security and debugging purposes. While LLM platforms attempt to detect and redact PII, 100% accuracy is not guaranteed. To minimise risks, it is recommended to redact sensitive information before sending the data to the LLM platform.
  4. Privacy and Security Policies of the Platform: Familiarising oneself with the privacy and security policies of the platform is crucial. Platforms, including Ambit AI, may provide options to exclude prompt and completion data from being logged and recorded. Understanding these policies ensures alignment with organisational requirements and enhances data protection.
  5. Usage of Prompt and Completion Data for Training: Some free-to-access LLM platforms use prompt and completion data as part of their training process. Evaluating the privacy policies of the LLM platform and the specific LLM employed can shed light on the steps taken to anonymise or remove PII. However, complete removal of PII is rarely guaranteed.
  6. Third-Party Data Sharing: The possibility of third-party access to prompt and completion data poses a challenge, especially when LLM platforms operate as black boxes. Relying on the privacy policies and reputation of the provider becomes essential. Choosing a trusted platform with robust privacy and security measures helps ensure the confidentiality of prompt and completion data.
  7. Security of the LLM Hosting Platform: Ensuring the security of the platform hosting the LLM is vital. Partnering with a trusted provider like Ambit AI, which prioritises comprehensive privacy and security policies, reduces the risk of security vulnerabilities and unauthorised access.
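The redaction approach described in points 1 and 3 can be sketched in a few lines. This is a minimal, regex-based illustration only: the patterns and the "CUST-" customer-number format are assumptions for this example, and a production system would use a dedicated PII-detection library or service rather than hand-written expressions.

```python
import re

# Hypothetical patterns for a few common PII types. The CUST-number
# format is an assumed example; real systems should use a dedicated
# PII-detection library or service instead of hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "CUSTOMER_NO": re.compile(r"\bCUST-\d{6}\b"),
}

def redact(prompt: str) -> str:
    """Mask known PII patterns before the prompt leaves the organisation."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

prompt = "Email jane.doe@example.com (customer CUST-123456, ph 021 555 0199)."
print(redact(prompt))
# → Email [EMAIL] (customer [CUSTOMER_NO], ph [PHONE]).
```

Because the masking happens before transmission, the sensitive values never appear in the platform’s logs, completions or training data, regardless of how the provider handles the request downstream.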

In conclusion, while the LLM itself does not inherently pose privacy and security risks, organisations must carefully evaluate the handling of prompt and completion data. By adopting proactive measures, such as redaction, verifying secure transmission, understanding platform privacy policies and partnering with reputable providers like Ambit AI, organisations can leverage the power of LLMs while safeguarding confidentiality. Protecting sensitive information throughout the process ensures compliance and reinforces the trust between organisations and their customers.
