ChatGPT, short for Chat Generative Pre-Trained Transformer, is the artificial intelligence chatbot from OpenAI. It is the fastest-growing app in Internet history, with over 100 million users globally as of June 2023. Almost 53% of audiences believed ChatGPT-generated content in the fields of finance, health, technology, travel, and entertainment to have been created or edited by humans. Numbers like these made me wonder what ChatGPT can't do, so I asked ChatGPT about some of its limitations. Here's what it had to say.
ChatGPT can’t provide real-time information
ChatGPT’s knowledge is based on the data available up until September 2021. It doesn’t have access to current news or events that have occurred after its knowledge cutoff.
Why is ChatGPT’s knowledge cut-off in September 2021?
The knowledge cutoff exists because the model was trained on a fixed dataset that extends only up to a certain point, which in this case is September 2021. Any events, developments, or changes that occurred in the world after that date are not part of the model's training data. Consequently, ChatGPT cannot access information or knowledge beyond that cutoff date.

The reason for the knowledge cutoff is practicality. Training a language model like ChatGPT requires significant computational resources and time. As new data becomes available, it needs to be processed, curated, and used to fine-tune the model, which is a resource-intensive process. To keep the model up-to-date, periodic retraining is necessary, but it cannot be done instantaneously.
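One way to picture the cutoff is as a hard date boundary on what the model has ever seen. As a toy illustration (the exact cutoff date and the helper function below are my own assumptions, not part of any OpenAI API), a simple check against that boundary looks like this:

```python
from datetime import date

# ChatGPT's stated training-data cutoff. OpenAI says only "September 2021",
# so the exact day used here is an assumption for illustration.
KNOWLEDGE_CUTOFF = date(2021, 9, 30)

def within_knowledge(event_date: date) -> bool:
    """Return True if an event falls inside the model's training window."""
    return event_date <= KNOWLEDGE_CUTOFF

print(within_knowledge(date(2020, 11, 3)))   # 2020 US election: True
print(within_knowledge(date(2022, 11, 30)))  # ChatGPT's own public launch: False
```

Anything on the "False" side of that boundary is simply absent from the training data, which is why the model cannot discuss it no matter how the question is phrased.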
ChatGPT cannot predict the future
ChatGPT cannot accurately predict future events or outcomes. It doesn’t have access to real-time data or the ability to forecast future events.
Why can’t ChatGPT predict the future?
• Uncertainty and complexity of the future: The future is inherently uncertain and complex. It depends on a multitude of variables, human actions, and unpredictable events. AI models like ChatGPT cannot accurately capture all these intricate factors and their interplay, making precise predictions challenging.
• Ethical and practical concerns: Making accurate predictions about the future raises ethical and practical concerns. The potential for misuse, misinformation, or undue influence is significant. AI models are cautious about making definitive future predictions to avoid creating false expectations or spreading unreliable information.
It's important to note that while AI models may provide probabilistic insights based on historical patterns, they should not be relied upon as crystal balls for predicting specific future events or outcomes. Expert human judgment, domain knowledge, and up-to-date information from reliable sources are crucial for making accurate predictions.
ChatGPT can't offer professional advice
In many of these cases, ChatGPT will offer general advice, but it also includes a disclaimer: "While I cannot provide personalized advice, here are some points to consider."
Why can’t ChatGPT offer professional advice?
• Lack of specialized training: ChatGPT is a general-purpose language model and is not trained in any particular profession or domain. It lacks the specialized knowledge and training required to provide accurate and reliable advice in professional areas such as law, medicine, finance, or engineering.
• Risk of misinformation: Providing professional advice without proper training and expertise can lead to misinformation and potential harm. Incorrect advice can have serious consequences in fields where accuracy is critical for making decisions that affect people’s health, finances, or legal rights.
• Legal and ethical concerns: Offering professional advice without the necessary qualifications can be ethically problematic and may even violate laws or regulations in some jurisdictions. It’s essential to respect professional boundaries and ensure that advice comes from qualified and licensed professionals.
• Dynamic nature of professions: Many professional fields are constantly evolving, with new research, practices, and regulations being introduced. AI language models like ChatGPT do not have real-time access to the latest updates and may not be aware of recent developments in specific professions.
ChatGPT can't guarantee 100% accuracy
ChatGPT aims to provide helpful and relevant information, but it can still generate incorrect or misleading responses. Users should exercise critical thinking and verify information from reliable sources.
Why can’t ChatGPT guarantee 100% accuracy?
• Dependency on training data: ChatGPT learns from patterns in the training data it was exposed to, but the data may contain errors, biases, or incomplete information. As a result, the model may generate responses that are factually incorrect or misleading.
• Lack of critical thinking and common sense: While ChatGPT can generate coherent and contextually relevant responses, it lacks true understanding, critical thinking, and common-sense reasoning. This can sometimes lead to responses that may sound plausible but are not entirely accurate or appropriate in certain situations.
• Interpretation of ambiguous queries: ChatGPT may interpret user queries in ways that humans might not intend or expect. Ambiguous or poorly worded queries can lead to inaccurate or irrelevant responses. Clarifying and refining queries can help mitigate this issue.
• Lack of real-time or domain-specific information: ChatGPT’s training data is not constantly updated, and it has a knowledge cutoff. Therefore, it may not have access to the latest information or developments in various fields, including news, research, or industry-specific knowledge.
It is important to approach responses from ChatGPT with a critical mindset: verify information from reliable sources and exercise judgment when evaluating the accuracy and reliability of the generated content. While AI models aim to provide helpful information, it's always wise to cross-reference and consult multiple sources for important or sensitive matters.
It's important to remember that while ChatGPT is a powerful tool, it has limitations. It should be used only as an aid for generating ideas, seeking information, or engaging in casual conversation.
Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.
