Is ChatGPT safe?

Justyna Obara
Cybersecurity Content Writer

For many, artificial intelligence was a somewhat theoretical concept until OpenAI introduced ChatGPT. Released at the end of 2022, the chatbot needed only five days to surpass 1 million users, and within a year over 180 million people around the globe were using it. Plenty of them no doubt registered simply out of curiosity, but the number of people using ChatGPT daily for private and work purposes keeps growing rapidly.

Usually, it’s the OpenAI tool answering our queries, but let’s do things differently today and actually ask ourselves two fundamental questions: “How does ChatGPT work?” and “Is ChatGPT safe to use?”

What is ChatGPT, and how does it work?

ChatGPT, short for Chat Generative Pre-trained Transformer, is a chatbot that uses artificial intelligence to mimic human language and conversations. It’s a large language model that pulls data from large datasets. It uses deep-learning algorithms, including neural networks, to process the information and generate almost-human-like text answering users’ queries.

ChatGPT is not the first AI-based tool to make its way into our lives. However, there’s a significant difference between OpenAI’s creation and Siri or Google Assistant: ChatGPT learns from us, humans. It may sometimes produce inaccurate or even biased information, but it gets smarter and more reliable with every prompt and piece of user feedback.

Now, let’s dive into the concerns related to ChatGPT security.

ChatGPT security concerns

We can put the safety concerns related to ChatGPT into a few different categories:

  • Data security risks

To talk with ChatGPT, a user must register on OpenAI’s website. The platform requires your name, email address, password, and phone number to create an account, and, if you’re going for the paid version, payment details. All of this data is stored by and available to OpenAI, possibly putting you at risk in the event of a data breach.

OpenAI also collects chat history, which became visible to other users during ChatGPT’s nine-hour outage in March 2023. Later, OpenAI released a report stating that a bug in an open-source library had also allowed unauthorized users to see the beginning of someone else’s conversations, account details, and payment information (including the last four digits of credit card numbers). The company states: “The full credit card numbers were not exposed at any time,” but the leaked data may have put the chatbot’s users in danger of social engineering or phishing attacks. And there’s no guarantee that similar data breaches won’t happen again.

  • Misuse of ChatGPT

ChatGPT can produce many lines of code at a speed humans can only dream of. No wonder it became an everyday tool for many programmers, hackers included. The chatbot can generate code to create malware or detailed instructions on how to hack a computer, which, combined with dark web forums and programming skills, may be a powerful weapon in the hands of cybercriminals.

As another example of possible misuse, OpenAI’s tool is also known for producing text in many different styles. If it can mimic an acclaimed writer, it can just as easily generate a huge number of perfectly crafted phishing emails.

  • Scam ChatGPT applications

Before OpenAI released an Android app at the end of July 2023, ChatGPT was only available on desktops and iPhones. In the meantime, apps masquerading as ChatGPT flooded the internet, spreading malware or charging people for services OpenAI provides for free.

Since the rollout of the legitimate apps and the removal of scam ones, the risk connected with fake ChatGPT apps has decreased. Still, if you’re asking yourself, “Is ChatGPT safe to download?”, the answer is “Yes, but only from a reliable source.”

  • Spreading misinformation

ChatGPT is trained with vast amounts of data, including books, articles, and websites, and it reflects the opinions shared by the authors. It can generate text containing false or misleading information that may lead to prejudice and bias. In times of “fake news,” it’s vital to cross-check data. ChatGPT is no exception.

ChatGPT security measures

OpenAI seems to take ChatGPT security seriously. The company has implemented several measures to ensure the safety of chatbot users and their private information.

Access control: OpenAI limits access to its models and data to a select group within the organization to prevent data breaches or misuse.

Encryption: Communication and data storage related to ChatGPT and other OpenAI models are encrypted to protect against unauthorized interception or access.

Monitoring and logging: OpenAI monitors ChatGPT usage and responds to any unusual or unauthorized activity.

Regular audits and assessments: The creators of ChatGPT conduct regular security audits and assessments to identify and address vulnerabilities, including internal and external reviews, to ensure a comprehensive evaluation.

Collaboration with security researchers: OpenAI also collaborates with the broader security research community, encouraging responsible disclosure of identified vulnerabilities.

User authentication: Users interacting with OpenAI's most famous creation are required to authenticate their identities.

Compliance with regulations: OpenAI complies with relevant data protection and privacy regulations to ensure appropriate and secure data handling. Details can be found in the company’s privacy policy.

Addressing bias: Bias in AI models can emerge from the data they are trained with and can reflect and perpetuate existing societal biases. OpenAI claims to train ChatGPT on diverse data sets that represent a wide range of perspectives and backgrounds. It also develops bias mitigation methods to identify and reduce biases in the chatbot’s answers.

How to use ChatGPT safely

ChatGPT’s security raises many questions, and it certainly is not bulletproof. Check out our tips on how to stay protected while using OpenAI’s chatbot.

1. Avoid fake websites and apps

Always interact with ChatGPT via its official website or mobile app. Fake applications may harvest your data, charge you for features that are supposed to be free, or even install malware on your device.

2. Secure your account with a strong password

Your account information and chat history are only as safe as your password. A strong password contains more than eight characters, including upper- and lowercase letters, numbers, and symbols. Use the online Password Generator to create complex and random login credentials and to check how secure your current password is. Or choose the easier way to safety: set up and manage login credentials in the NordPass password manager.
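To give a rough idea of what “complex and random” means in practice, here is a minimal Python sketch (an illustration, not NordPass’s implementation) that generates a password meeting the criteria above using the standard `secrets` module, which is designed for cryptographically secure randomness:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing upper- and lowercase
    letters, digits, and symbols."""
    if length <= 8:
        raise ValueError("Use more than eight characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())  # a random 16-character password
```

Note the use of `secrets.choice` rather than the `random` module: the latter is not suitable for security-sensitive values such as passwords.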

3. Don’t share personal information or content

Interactions with ChatGPT are not private. OpenAI can use your chat history for research and model improvement purposes, which is why you should never share personal, confidential, or sensitive information, such as passwords or financial details. Also, be cautious when discussing personal or sensitive topics, especially if they could be used to identify you.

4. Cross-check the information and be aware of bias

ChatGPT reflects the opinions and biases of the data sets it’s been trained on. That's why you should always cross-check the information the chatbot serves you against reliable sources and approach it with a healthy dose of skepticism.

5. Report issues

Provide feedback to OpenAI if you encounter any issues, biases, or inappropriate behavior in ChatGPT. To do that, log in to your account and use the “Help” button to start a conversation. If you don't have an OpenAI account or can't log in, visit the OpenAI Help Center and select the chat bubble icon in the bottom right.


What is ChatGPT doing with my data?

OpenAI uses personal information to provide, maintain, improve, and analyze ChatGPT. The company also develops new programs and services based on user data and carries out business transfers. Note: According to its privacy policy, OpenAI may, in some instances, provide user data to third parties without further notice.

Does ChatGPT record data?

Yes, ChatGPT saves and stores user data, including:

  • Usage data (your location, time of use, and chatbot version).

  • Log data (your IP address and browser type).

  • Device data (your device type and operating system).

  • Content produced during conversations with the chatbot.

Does ChatGPT sell your data?

OpenAI claims not to sell or share user data for marketing and advertising purposes. However, its privacy policy states that the company may share users' private information with third-party vendors and service providers, which raises some concerns.

Is ChatGPT confidential?

No, ChatGPT is not confidential. The app logs users' conversations and other personal data to train its model. OpenAI can also share users' private information with third parties like vendors or legal authorities. The company claims to put a lot of effort into its privacy practices, but there’s already been an incident in which users' data and conversation history were exposed.

Is ChatGPT safe to use at work?

The biggest risk for enterprises is that employees treat ChatGPT as a harmless tool for cutting mundane tasks, something like a cutting-edge calculator. However, the information employees share with the free OpenAI chatbot goes to OpenAI's servers, where it may be stored, used for training, and, as the March 2023 incident showed, even revealed to other users.

OpenAI offers an app for business, ChatGPT Enterprise, with dedicated privacy and security features. It doesn’t train on the company’s data, making it more secure for work.

Keep in mind that the business version of the chatbot doesn’t solve the issues of unreliable information or the potential infringement of intellectual property rights in the books, articles, and websites on which ChatGPT is trained.

Is ChatGPT safe for kids?

ChatGPT is available for users over 13, and it’s unsafe for younger children to use it unsupervised. Despite the safety mitigations OpenAI implemented, there are many examples of the chatbot producing content not suitable for children.

Parents should also be wary of ChatGPT reproducing unreliable or biased information.

Is ChatGPT safe for students?

ChatGPT can be helpful for research but lacks critical thinking and analysis abilities. It can provide false information, so you should always cross-check it with reliable sources.

The OpenAI chatbot is trained on books and articles whose ownership it doesn’t acknowledge, which can lead to copyright issues, plagiarism, and incorrect source citations.

Should I use my real name on ChatGPT?

You should avoid sharing any private information while interacting with ChatGPT. Consider using a pseudonym or removing your name from the queries.

Why does ChatGPT need my phone number?

OpenAI needs your phone number for authentication purposes, to ensure you’re a real person, and to secure your account.

Remember, your private information, including your phone number, is unavailable to the chatbot itself. And you should never share this kind of information with it!

Can ChatGPT access any information from my computer?

ChatGPT is a text-based model that processes interactions on its servers. The model generates responses based on the input it receives, but it cannot access files on your device or retrieve personal data from your computer.

There is some technical data that OpenAI automatically collects, like your log and usage data and device information. To find out more, check the company’s privacy policy.

How do I delete my chat history on ChatGPT?

To delete your chat history:

  1. Sign in to ChatGPT.

  2. Click your account icon on the bottom left corner of your screen (desktop) or in the menu bar (app).

  3. Choose “Settings.”

  4. Select “Data controls.”

  5. Click “Clear chat history” and then “Confirm.”

You can also remove a specific conversation by clicking its entry on the left-hand side and then choosing the trash can icon.

Can you delete your ChatGPT account?

You can submit a deletion request through OpenAI’s privacy portal or delete the account yourself.

To delete your ChatGPT account manually:

  1. Sign in to ChatGPT.

  2. Click your account icon on the bottom left corner of your screen (desktop) or in the menu bar (app).

  3. Choose “Settings.”

  4. Go to “Data controls.”

  5. Then, choose “Delete account” and “Confirm.”

Remember that after deleting the account, you won’t be able to create a new one using the same email address.
