Skip to main content

Is ChatGPT safe?

Ruta Tamosaityte
Content Writer
Is ChatGPT safe

For many, artificial intelligence was a somewhat theoretical concept until OpenAI introduced ChatGPT. Released at the end of 2022, it needed only 5 days to surpass 1 million users. Now, over 180 million people around the globe use this chatbot. Undoubtedly, plenty of people registered on openai.com simply out of curiosity. However, the number of people using ChatGPT daily for personal and professional purposes is growing exponentially.

Usually, it's the OpenAI tool answering our queries, but today, let's do things differently and ask ourselves two fundamental questions: “How does ChatGPT work?” and “Is ChatGPT safe to use?”

So, is ChatGPT safe?

Although ChatGPT has multiple built-in security features and is generally considered safe, there are still some privacy concerns. Unfortunately, like any other online tool, it’s not without its own risks. Therefore, it’s essential to stay informed about these risks and learn how to mitigate them. In this article, we’ll do just that.

What is ChatGPT, and how does it work?

ChatGPT is a form of generative AI, a chatbot that, once provided with a prompt, returns human-like text, images, or videos and even offers conversation. It uses deep learning algorithms, including neural networks, to process the information and generate text responses that are nearly indistinguishable from human language.

ChatGPT security concerns

Here’s an array of cybersecurity concerns that we advise taking into consideration:

  • Retention and data leakage. ChatGPT collects user data from prompts, which may include sensitive information, and retains chat histories for at least 30 days. While this data can be used to improve services, it also raises concerns about data protection and security. Fortunately, you can manage your data by turning off the "Improve the model for everyone" feature in Data Controls. This prevents your chats from being used to train the model.

However, despite these controls, there is still a risk of data leaks and breaches. AI tools aren’t immune to data breaches. One significant ChatGPT security breach exposed over 225,000 credentials on the dark web. Once malicious actors gain access to these accounts, they can view your complete chat history, including any sensitive data that you shared with the AI tool.

  • Misuse. This artificial intelligence tool can produce large amounts of code at a speed humans can only dream of. No wonder it’s become an everyday tool for many programmers, hackers included. This raises the risk that the chatbot could be used to create malware or provide detailed instructions on how to hack a computer. Combined with dark web forums and programming skills, this could be a powerful weapon in the hands of cybercriminals.

    Another example of possible misuse is ChatGPT scams. Since it can mimic different writing styles and generate large amounts of text, this AI language model becomes a potential tool for phishing scams. With its ability to create perfectly crafted phishing emails, ChatGPT could be used to deceive even the most cautious individuals.

  • Fake ChatGPT apps. The tale is almost as old as the internet itself—cybercriminals will try to trick people into revealing sensitive data or scam them out of their money by pretending to be a legitimate service. ChatGPT is no exception, with fake apps flooding the internet and spreading malware or making people pay for services that OpenAI provides for free.

    While these malicious apps seem to have been removed from official stores, some risks still remain present. For instance, you can come across them in phishing messages that promote ChatGPT and make you download a fake ChatGPT app. Interaction with these apps can result in dire consequences such as financial loss or identity theft.

  • Spreading misinformation. ChatGPT, like other AI tools, is trained on vast amounts of data, including books, articles, and websites. It reflects the opinions of the authors of that data. It can generate text containing false or misleading information that may perpetuate prejudice and bias. In times of “fake news,” it's vital to cross-check data. ChatGPT is no exception.

  • Prompt injection attacks. This is yet another ChatGPT safety concern in which malicious actors craft prompts designed to trick the large language model into revealing sensitive data or bypassing safety guardrails. Attackers can extract private data, including personally identifiable information and proprietary content, by prompting the AI language model to repeat specific words indefinitely.

ChatGPT security measures

OpenAI appears to prioritize the security of its AI tool, ChatGPT, by implementing several measures to ensure its safety and address user privacy concerns regarding their private information.

Access control: OpenAI limits access to its models and data to a select group within the organization to prevent data breach or misuse.

Encryption: Communication and data storage related to ChatGPT and other OpenAI models are encrypted to protect against unauthorized interception or access.

Monitoring and logging: OpenAI monitors ChatGPT usage and responds to any unusual or unauthorized activity.

Regular audits and assessments: The creators of ChatGPT conduct regular security audits and assessments to identify and address vulnerabilities, including internal and external reviews, to ensure a comprehensive evaluation.

Collaboration with security researchers: OpenAI also collaborates with the broader security research community, encouraging responsible disclosure of identified vulnerabilities.

User authentication: Users interacting with OpenAI's most famous creation are required to authenticate their identities.

Compliance with regulations: OpenAI complies with relevant data protection and privacy regulations that ensure appropriate and secure data handling. Details and the company’s policies can be found on trust.openai.com.

Addressing bias: Bias in AI models can emerge from the data they are trained with and can reflect and perpetuate existing societal biases. OpenAI claims to train ChatGPT on diverse data sets that represent a wide range of perspectives and backgrounds. It also develops bias mitigation methods to identify and reduce biases in the chatbot’s answers.

How to use ChatGPT safely

ChatGPT’s security raises many questions and it certainly is not bulletproof. Check out our tips on how to stay protected while using OpenAI’s chatbot.

1. Avoid fake websites and apps

Always interact with ChatGPT via its website chat.openai.com, or its official mobile app. The fake applications may harvest your data, make you pay for functions that are supposed to be free, or even install malware on your device.

2. Secure your account with a strong password

Your account information and chat history are only as safe as your password. It should always contain more than eight characters, including upper- and lowercase characters and symbols. Use the online Password Generator to create complex and random login credentials and check how secure your current password is. Or, choose the easier way to safety: set up and manage login credentials in the NordPass password manager.

3. Don’t share personal information or content

Interactions with ChatGPT are not private. OpenAI can use your chat history for research and model improvement purposes which is why you should never share your personal, confidential, or sensitive information, such as passwords or financial details. Also, be cautious when discussing personal or sensitive topics, especially if they can lead to identifying you.

4. Cross-check the information and be aware of bias

ChatGPT reflects the opinions and biases of the data sets it’s been trained with. That's why you should always cross-check the information the chatbot serves you with reliable sources and approach them with a healthy dose of skepticism.

5. Report issues

Provide feedback to OpenAI if you encounter any issues, biases, or inappropriate behavior with ChatGPT. To do that, log in to your account and use the “Help” button to start a conversation. If you don't have an OpenAI account or can't log in, go to help.openai.com and select the chat bubble icon in the bottom right.

FAQ

What is ChatGPT doing with my data?

OpenAI uses personal information to provide, maintain, improve, and analyze ChatGPT. The company also develops new programs and services based on user data and carries out business transfers. Note: According to its privacy policy, OpenAI may, in some instances, provide user data to third parties without further notice.

Does ChatGPT record data?

Yes, ChatGPT saves and stores user data, including:

  • Usage data (location, the time, and the chatbot version).

  • Log data (user’s IP address, the browser).

  • Device data (user’s type of device and operating system).

  • Content produced during the conversations with the chatbot.

Does ChatGPT sell your data?

OpenAI claims not to sell or share user data for marketing and advertising purposes. However, its privacy policy states that the company may share users' private information with third-party vendors and service providers, which raises some concerns.

Is ChatGPT confidential?

No, ChatGPT is not confidential. The app logs users' conversations and other personal data to train its model. OpenAI can also share users' private information with third parties like vendors or legal authorities. The company claims to put a lot of effort into privacy policies, but there’s already been an incident when users' data and conversation history were exposed.

Is ChatGPT safe to use at work?

The most considerable risk for enterprises is that people think ChatGPT is a tool to cut mundane tasks, something like a cutting-edge calculator. However, the information employees share with the free OpenAI chatbot can go into the cloud or be logged into its servers and revealed to different users during the conversation.

OpenAI offers an app for business, ChatGPT Enterprise, with dedicated privacy and security features. It doesn’t train on the company’s data, making it more secure for work.

Keep in mind that the business version of the chatbot doesn’t solve issues related to processing unreliable information or bridging the property rights of books, articles, and websites on which ChatGPT is being trained.

Is ChatGPT safe for kids?

ChatGPT is available for users over 13, and it’s unsafe for younger children to use it unsupervised. Despite the safety mitigations OpenAI implemented, there are many examples of the chatbot producing content not suitable for children.

Parents should also be wary of ChatGPT reproducing unreliable or biased information.

Is ChatGPT safe for students?

ChatGPT can be helpful for research but lacks critical thinking and analysis abilities. It can provide false information, so you should always cross-check it with reliable sources.

The OpenAI chatbot is being trained on books and articles whose ownership it doesn’t acknowledge, which can lead to copyright issues, plagiarism, and incorrect source quotations.

Should I use my real name on ChatGPT?

You should avoid sharing any private information while interacting with ChatGPT. Consider using a pseudonym or removing your name from the queries.

Why does ChatGPT need my phone number?

OpenAI needs your phone number for authentication purposes, to ensure you’re a real person, and to secure your account.

Remember, your private information, including the phone number, is unavailable to the chatbot itself. And you should never share this kind of info with it!

Can ChatGPT access any information from my computer?

ChatGPT is a text-based model that processes interactions on its servers. The model generates responses based on the input it receives, but it cannot access files on your device, or retrieve personal data from your computer.

There is some technical data that OpenAI automatically collects, like your log and usage data and device information. To find out more, check the company’s privacy policy.

How do I delete my chat history on ChatGPT?

To delete your chat history:

  1. Sign in to ChatGPT.

  2. Click your account icon on the bottom left corner of your screen (desktop) or in the menu bar (app).

  3. Choose “Settings.”

  4. Select “Data controls.”

  5. Click “Clear chat history” and then “Confirm.”

You can also remove a specific conversation by clicking its entry on the left hand-side and then choosing the trash can icon.

Can you delete your ChatGPT account?

You can submit a request to delete your account through privacy.openai.com or do it yourself.

To delete your ChatGPT account manually:

  1. Sign in to ChatGPT.

  2. Click your account icon on the bottom left corner of your screen (desktop) or in the menu bar (app).

  3. Choose “Settings.”

  4. Go to “Data controls.”

  5. Then, choose “Delete account” and “Confirm.”

Remember that after deleting the account, you won’t be able to create a new one using the same email address.