By Tim Warren on February 21, 2023

Security and Privacy with GPT

OpenAI’s ChatGPT and similar publicly available tools can learn from and store large amounts of data about their users, especially their input text. While this technology has many benefits, it also raises concerns about privacy and security risks. In this blog, we will discuss the potential risks associated with using OpenGPT Models and offer some tips on how to mitigate them. We use the term OpenGPT to mean any publicly available and publicly hosted Generative Pre-trained Transformer model.

Privacy Risks with OpenGPT Models

One of the most significant privacy risks associated with using OpenGPT Models is the potential for data breaches. If the system is hacked or otherwise compromised, the sensitive information it has been trained on, along with anything users have entered in prompts, could be accessed by malicious actors. This includes personal data like names, addresses, phone numbers, and email addresses, as well as sensitive information like credit card numbers and social security numbers, if these have been entered.

Another concern with using OpenGPT Models is the potential for unintended data sharing, which can occur if the AI assistant shares information with third-party companies without the user’s consent. If, for example, the company that owns an OpenGPT Model decided to sell user input data to advertisers or other third-party entities, this would be a severe violation of user privacy, as it would allow outside parties to access private information without permission.

More generally, there is the risk of data misuse by the OpenGPT Models system itself. If the AI assistant is not properly programmed or monitored, it could use data in ways that are unexpected or unwanted. This could include using personal information to make decisions about a user’s online activity or behaviour, or using data to create profiles or other types of targeted marketing, which can itself introduce bias.

Bias can arise in several ways:

1: bias that exists naturally in the underlying data used to train the model

2: bias that occurs in the technology used to generate content

3: bias introduced deliberately in order to skew opinions and views

Reverse engineering these large models to detect purposely introduced bias is costly and time-consuming.

Security Risks with OpenGPT Models

In addition to privacy concerns, there are also significant security risks associated with using OpenGPT Models. The most serious is the potential for hackers to gain access to the system and use it for nefarious purposes. This could include stealing sensitive information, spreading malware, or using the system as a gateway to other systems.

Another risk associated with OpenGPT Models is the potential for users to accidentally reveal sensitive information to the system. For example, if a user shares their password or other sensitive information with OpenGPT Models, the system could store that information and make it vulnerable to outside attacks.

Finally, there is the risk of OpenGPT Models being used to spread misinformation or propaganda. This could occur if the system is not properly trained to recognise and flag fake news or other types of misleading content. If OpenGPT Models are not equipped to detect and filter out such content, they could be used to spread misinformation and sow discord online.

Tips for Mitigating Risks with OpenGPT Models

There are several steps that users can take to mitigate the risks associated with using OpenGPT Models. One of the most important steps is to be cautious about what information is shared with the system. Users should avoid sharing sensitive information like passwords or credit card numbers, and should limit the amount of personal information that is shared with the system.
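As a concrete illustration of limiting what you share, here is a minimal Python sketch that redacts a few common PII patterns from a prompt before it leaves your machine. The `PII_PATTERNS` table and `redact` helper are our own illustrative names, not part of any real API, and the regular expressions are deliberately simple; a production system should use a dedicated PII-detection tool rather than a handful of patterns.

```python
import re

# Illustrative PII patterns only; deliberately simple and not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder
    before the prompt is sent to a publicly hosted GPT service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact me at jane@example.com or +64 21 555 0123."))
# -> Contact me at [EMAIL REDACTED] or [PHONE REDACTED].
```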

Another important step is to use strong passwords and to change them frequently. This can help to reduce the risk of data breaches, as it makes it more difficult for hackers to gain access to sensitive information. Additionally, users should be cautious about the links and attachments they open when using OpenGPT Models, as these can be used to spread malware or other types of malicious software.
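On the password point, a short sketch using Python’s standard-library `secrets` module (a cryptographically secure random source) shows one way to generate a strong password; the `generate_password` name and the 20-character default are our own choices for illustration.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```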

It is also important to regularly update the system and any associated software. This can help to address known security vulnerabilities and reduce the risk of attacks. Additionally, users should be cautious about the third-party apps and services that they use with OpenGPT Models, as these can introduce new security risks.

Finally, it is important for users to be aware of their rights when it comes to data privacy. This includes the right to know what information is being collected by OpenGPT Models, how that information is being used, and who has access to that information.

Remember: GPT is pretrained on publicly available information taken from the web. To protect yourself from the risks listed in this article, ask yourself these questions:

1: When people ask questions, how is that information stored and protected?

2: Could these questions/prompts contain confidential or personal information?

3: Are fine-tuned models protected, and is there a way to reverse engineer a fine-tuned model in order to extract the information that was used to fine-tune it? Could that information be confidential?

Ultimately, if you want better security and stronger privacy assurance, you are best off selecting GPT capability from a private, non-public provider that does not carry the inherent risks listed above. This gives you far more control over your data security and privacy.
