Tag Archive for ChatGPT

ChatGPT Cannot Do Everything

ChatGPT, short for Chat Generative Pre-Trained Transformer, is the artificial intelligence chatbot from OpenAI. It is the fastest-growing app in Internet history, with over 100 million users globally as of June 2023. Almost 53% of audiences believed ChatGPT-generated content in the fields of finance, health, technology, travel, and entertainment was created or edited by humans. Numbers like these made me wonder what ChatGPT can’t do, so I asked ChatGPT about some of its limitations. Here is what it had to say.

ChatGPT can’t provide real-time information

ChatGPT’s knowledge is based on the data available up until September 2021. It doesn’t have access to current news or events that have occurred after its knowledge cutoff.

Why is ChatGPT’s knowledge cut-off in September 2021?

  • The knowledge cutoff exists because the model has a fixed dataset up to a certain point, which in this case is September 2021. After this date, any events, developments, or changes that occurred in the world are not part of the model’s training data. Consequently, ChatGPT cannot access information or knowledge beyond that cutoff date.
  • The reason for the knowledge cutoff is practicality. Training a language model like ChatGPT requires significant computational resources and time. As new data becomes available, it needs to be processed, curated, and used to fine-tune the model, which is a resource-intensive process. To ensure that the model remains up-to-date, periodic retraining is necessary, but it cannot be done instantaneously.

ChatGPT cannot predict the future

ChatGPT cannot accurately predict future events or outcomes. It doesn’t have access to real-time data or the ability to forecast future events.

Why can’t ChatGPT predict the future?

  • Uncertainty and complexity of the future: The future is inherently uncertain and complex. It depends on a multitude of variables, human actions, and unpredictable events. AI models like ChatGPT cannot accurately capture all these intricate factors and their interplay, making precise predictions challenging.
  • Ethical and practical concerns: Making accurate predictions about the future raises ethical and practical concerns. The potential for misuse, misinformation, or undue influence is significant. AI models are cautious about making definitive future predictions to avoid creating false expectations or spreading unreliable information.
  • It’s important to note that while AI models may provide probabilistic insights based on historical patterns, they should not be relied upon as crystal balls for predicting specific future events or outcomes. Expert human judgment, domain knowledge, and up-to-date information from reliable sources are crucial for making accurate predictions.

ChatGPT cannot offer professional advice

In many of these cases ChatGPT will offer general advice, but it also includes a disclaimer: “While I cannot provide personalized advice, here are some points to consider.”

Why can’t ChatGPT offer professional advice? 

  • Lack of specialized training: ChatGPT is a general-purpose language model and is not trained in any particular profession or domain. It lacks the specialized knowledge and training required to provide accurate and reliable advice in professional areas such as law, medicine, finance, or engineering.
  • Risk of misinformation: Providing professional advice without proper training and expertise can lead to misinformation and potential harm. Incorrect advice can have serious consequences in fields where accuracy is critical for making decisions that affect people’s health, finances, or legal rights.
  • Legal and ethical concerns: Offering professional advice without the necessary qualifications can be ethically problematic and may even violate laws or regulations in some jurisdictions. It’s essential to respect professional boundaries and ensure that advice comes from qualified and licensed professionals.
  • Dynamic nature of professions: Many professional fields are constantly evolving, with new research, practices, and regulations being introduced. AI language models like ChatGPT do not have real-time access to the latest updates and may not be aware of recent developments in specific professions.

ChatGPT cannot guarantee 100% accuracy

ChatGPT aims to provide helpful and relevant information, but it can still generate incorrect or misleading responses. Users should exercise critical thinking and verify information from reliable sources.

Why can’t ChatGPT guarantee 100% accuracy?

  • Dependency on training data: ChatGPT learns from patterns in the training data it was exposed to, but the data may contain errors, biases, or incomplete information. As a result, the model may generate responses that are factually incorrect or misleading.
  • Lack of critical thinking and common sense: While ChatGPT can generate coherent and contextually relevant responses, it lacks true understanding, critical thinking, and common-sense reasoning. This can sometimes lead to responses that may sound plausible but are not entirely accurate or appropriate in certain situations.
  • Interpretation of ambiguous queries: ChatGPT may interpret user queries in ways that humans might not intend or expect. Ambiguous or poorly worded queries can lead to inaccurate or irrelevant responses. Clarifying and refining queries can help mitigate this issue.
  • Lack of real-time or domain-specific information: ChatGPT’s training data is not constantly updated, and it has a knowledge cutoff. Therefore, it may not have access to the latest information or developments in various fields, including news, research, or industry-specific knowledge.

It is important to approach responses from ChatGPT with a critical mindset. You must verify information from reliable sources, and exercise judgment when evaluating the accuracy and reliability of the generated content. While AI models aim to provide helpful information, it’s always wise to cross-reference and consult multiple sources for important or sensitive matters.
Remember that while ChatGPT is a powerful tool, it has limitations. It should be used only as an aid for generating ideas, seeking information, or engaging in casual conversation.

 

How you can help Ukraine!

Related article

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Why Meta Released Threads Now

Unless you’ve been sleeping under a rock, you have heard that Facebook / Meta / Instagram (META) has released an alternative to Twitter, called Threads. Others have found the app to be incomplete. Even Instagram CEO Adam Mosseri wrote that there are “tons of basics” missing.

What Threads is missing

Reviewers say the app is missing functions like:

  • Chronological feeds,
  • Direct messaging between users,
  • The ability to edit a post,
  • Following feeds,
  • Hashtags,
  • Search capabilities, and
  • Web version.

The app seems rushed. Why did Zuckerberg push an incomplete product out the door now?

ChatGPT

The first reason could be the meteoric rise of generative artificial intelligence, which Meta has no answer for. Threads represents a huge new opportunity for Meta to gather training data for its own AI technology. This new data could help it catch up to industry leaders. Microsoft (MSFT) has added OpenAI into a Microsoft Bing chatbot. Google (GOOG) is also working on a chatbot named Bard.

Meta has released AI chatbots in the past, but they were not very good. One, named BlenderBot, was criticized for being simply… not very good. Another, code-named Galactica, aimed to use machine learning to understand and organize science for its users. Facebook fed it 48 million science papers. It created scientific nonsense or just provided incorrect information, and it struggled to understand or compute math at the grade-school level. Researchers shut down the system after just two days.

It takes vast amounts of data to train a generative artificial intelligence, a system that can produce new text, images, video, and other outputs like code and music on its own. These systems rely on the data used to train them and can reflect any biases, errors, or falsities inherent in the original dataset. By mandating Threads access through Instagram’s 2.35 billion users, Meta can instantly gain all of Instagram’s data to feed its artificial intelligence. By feeding the data from Threads and Instagram into its AI, Meta has significantly increased its ability to train AI to take on OpenAI, Microsoft, and Google.

2024 Elections

Another possible reason Threads has surfaced now is the U.S. elections. The 2024 election season is heating up, and an estimated $1.7 billion will be spent on digital media for the elections. Surely Zuckerberg wants to use Threads to grab another large slice of that pie.

It is important to remember the shameful role that Zuckerberg’s Facebook played in the 2016 election. During the 2016 election cycle, Facebook published disinformation produced by a Russian troll farm to as many as 10 million people. Some of the ads were paid for in Russian currency. Zuckerberg subsequently dismissed the claim, saying the idea that fake news on Facebook influenced the 2016 election was “pretty crazy.” 2016 should ring warning bells for people who cherish democracy.

rb-

Maybe Zuck wants this to be the opening event leading up to the promised cage match between Zuck and fellow megalomaniac techbro Elon Musk.

Whatever reason Zuckerberg had to push an incomplete product out the door, his history says it won’t be good for us.


ChatGPT Hacking: What You Need to Know and Do

ChatGPT is an artificial intelligence chatbot. It can interact with users in a conversational way. It is powered by a large language model called GPT-4, which can understand and generate natural language responses based on user prompts. People can use ChatGPT for various purposes, such as getting information, entertainment, education, or productivity. ChatGPT is reportedly the fastest-growing consumer application in history.

The artificial intelligence chatbot from OpenAI has been the cool kid on the tech block since November 2022. Followers of the Bach Seat are smart enough to know what that means: hackers are going after ChatGPT. Recent reports from cybersecurity researcher Group-IB have found over 100,000 ChatGPT logins for sale on the dark web.

Attractive to attackers

The AI uses you to learn. Every time you interact with it, ChatGPT gathers more info about you. Unlike Google, which collects data on what you are doing, you are feeding your info directly into ChatGPT. The information ChatGPT gathers from you also makes it attractive to attackers.

Did you ask it for a strong password for your checking account? ChatGPT remembers.

Did you ask it about a medical condition? ChatGPT remembers it and adds it to its “intelligence.”

Did you ask it to proofread your report for the boss? ChatGPT now knows all the confidential corporate info in your report.

Information-stealing malware

Attackers want that info too. They can scoop up the data from a hacked ChatGPT account. Hackers can use the stolen data to impersonate users, access their online accounts, steal their money or assets, blackmail them, or sell their information to other criminals or advertisers.

According to the Singapore-based firm, attackers are using the Raccoon information-stealing malware to scoop up ChatGPT credentials. Raccoon is subscription-based crimeware that attackers can license for as little as $200 a month and embed in a malware-laden email.

How to protect yourself from ChatGPT hackers

The first step is to be careful about what you share with ChatGPT. Don’t give it any personal or sensitive information that you wouldn’t want anyone else to know. Remember that ChatGPT is not a human friend, but a machine that can store and process your data.

The second step is to use a strong and unique password for your ChatGPT account. Use a combination of uppercase and lowercase letters, numbers, and special characters. Avoid using easily guessable passwords or reusing passwords from other accounts. Use a password manager to generate and store complex passwords that are hard to guess or crack.
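If you don’t have a password manager handy, most languages can generate a password meeting these rules from their standard library. Here is a minimal Python sketch; the 16-character default and the required character classes mirror the advice above, not any ChatGPT requirement:

```python
import secrets
import string


def make_password(length: int = 16) -> str:
    """Generate a random password mixing upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until every recommended character class is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

The `secrets` module is used instead of `random` because it draws from the OS’s cryptographic randomness source, which is what password generation calls for.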

Periodically change your ChatGPT password. This will minimize the risk of unauthorized access. Avoid using the same password for an extended period and ensure new passwords are strong and unique.

The third step is to configure ChatGPT for more privacy.

Clear Your ChatGPT Conversations: To keep the information you’ve shared with ChatGPT away from attackers, regularly clear your saved ChatGPT conversations. To clear your ChatGPT conversations:

  1. Log in to ChatGPT.
  2. Click on your account name in the bottom left corner of the ChatGPT interface.
  3. Click Clear all chats.
  4. Click again to Confirm.

All of your saved conversations should be deleted. This can limit the amount of data stored on ChatGPT, which can help reduce the impact in case of a data breach.

Turn off chat history and model training: You can prevent ChatGPT from using your personal info to grow the AI. To disable chat history and model training:

  1. Log in to ChatGPT.
  2. Click on your account name in the bottom left corner of the ChatGPT interface. 
  3. Click Settings.
  4. Click Data Controls.
  5. Toggle Chat history & training to off.

OpenAI says that while history is disabled, new conversations “won’t be used to train and improve our models and won’t appear in the history sidebar.” It does retain all conversations for 30 days to monitor for abuse.

OpenAI also points out that this will not prevent unauthorized browser add-ons or malware on your computer from storing your history. The other limitation is that the setting does not sync across browsers or devices, so you will have to change it on each device.

Another step is to monitor your ChatGPT activity and report any suspicious or unauthorized actions. You can check your chat history and settings on the ChatGPT website or app. If you notice anything unusual, such as messages you didn’t send or changes you didn’t make, contact ChatGPT support immediately and change your password.

Finally, educate yourself and others about the risks and benefits of using ChatGPT. Read the terms of service and privacy policy of ChatGPT before using it. Learn how ChatGPT works and what it can and can’t do. Share this blog post with your friends and family who use ChatGPT and help them stay safe online.

Where is MFA?

Multi-factor authentication (MFA) is the gold standard for securing your online accounts, and you should enable it whenever possible. MFA adds an extra layer of security by requiring an additional verification step, such as a unique code sent to a mobile device, to access the account. But ChatGPT does not offer this basic security tool.
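For context on what ChatGPT is missing: the one-time codes that authenticator apps generate follow the TOTP standard (RFC 6238), which is just an HMAC over the current 30-second time step, truncated to a few digits. A small Python sketch of the algorithm, using the RFC’s published test secret rather than any real account credential:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: the low nibble of the last byte
    # selects a 4-byte window, masked to 31 bits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret and the clock, a stolen password alone is not enough to log in, which is exactly why its absence from ChatGPT is notable.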

rb-

We have seen this list after years and years of preaching account security. Give your ChatGPT account the same level of attention you give other sensitive accounts, like your email, and take the necessary steps to protect it and yourself.

ChatGPT is an amazing technology that can enrich our lives and experiences. But like any other technology, it comes with some challenges and dangers that we need to be aware of and prepared for. By following these steps, you can enjoy chatting with ChatGPT without compromising your security or privacy.

