Post by account_disabled on Sept 16, 2023 11:13:31 GMT
While playing with ChatGPT, I asked it several questions, and it answered confidently. Some of the answers were surprisingly insightful. But it often gave wrong answers.
I recently saw a Google 'answer' posted on Mastodon (see picture below). In this 'answer', Google said that dumping used car batteries in the ocean has several benefits, including charging electric eels and powering the Gulf Stream. The Bing search engine (not ChatGPT Bing) also seemed to have learned this 'answer', citing the environmental benefits of illegal battery disposal. Results like this make me wonder whether Google will next be telling people they can fertilize their lawns with used motor oil. (Author's note: for the noobs out there, never throw batteries in the ocean or fertilize your lawn with engine oil.)
[Image: the search result described above. ⓒ Foundry]
Perhaps the person who searched for this was looking for a joke, but it raised an interesting question for me: what will happen as more people use ChatGPT, publish its output on the web, and let the AI system collect and learn from the answers it generated itself?
So I asked ChatGPT, “What happens when many people post ChatGPT’s responses and ChatGPT learns from its own output?”
The bot said:
“If many people post ChatGPT’s responses and ChatGPT is then trained on its own output, that is called self-training or self-supervised learning. In this scenario, the model can learn from its own artifacts, potentially improving performance.”
“But there are potential risks to consider with self-training. One risk is that the model may overfit to its own output, like the blind men and the elephant: the model may become overly specialized at answering questions it has already seen, while its performance deteriorates on new or unseen data.”
“Another risk is that, if not properly monitored and supervised during self-training, the model may produce meaningless or inappropriate responses. The model can pick up and amplify patterns or biases in its own output. It is therefore important to exercise caution when self-training a language model like ChatGPT, and to monitor its performance and output carefully to ensure it continues to produce high-quality responses.”
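To see how that feedback loop can go wrong, here is a minimal toy simulation in Python (my own sketch, not anything ChatGPT or Google produced). The "model" is just a Gaussian fitted by mean and standard deviation, retrained each generation on its own published output. The one assumption I add is that a model under-produces rare, extreme outputs (much as language models favor high-probability tokens), which I approximate by dropping the 10% most extreme samples each round.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data the toy model is originally trained on.
data = rng.normal(loc=0.0, scale=1.0, size=2000)

for gen in range(1, 11):
    # "Train": refit the toy model on whatever data it currently sees.
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen:2d}: fitted std = {sigma:.3f}")
    # "Publish": the model's own output becomes the next training set.
    samples = rng.normal(mu, sigma, size=2000)
    # Assumption: the model under-produces rare, extreme outputs, modeled
    # here by dropping the 10% of samples farthest from the mean.
    distance = np.abs(samples - mu)
    data = samples[distance < np.quantile(distance, 0.9)]

On a typical run the fitted standard deviation falls from about 1.0 to roughly 0.1 within ten generations: the model grows ever more confident about an ever narrower slice of the original data. It is only a cartoon of the dynamics, but it illustrates the overfitting and bias amplification the bot warns about, sometimes called "model collapse" in the research literature.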
This answer suggests that if many people trust ChatGPT and assume it is right, the consequences could be dire. So I asked, “What will the impact on society be if many people rely on ChatGPT and ChatGPT starts providing nonsensical answers through self-learning?”
The bot said:
“If many people rely too heavily on language models like ChatGPT and ChatGPT starts generating meaningless responses through self-training, there could be some negative social implications. First, users may become frustrated or confused if ChatGPT provides meaningless or inappropriate answers, which can lead to communication breakdowns and distrust of the technology.”
“Second, if users rely on ChatGPT to make important decisions and the model provides nonsensical answers, the consequences can be serious. Suppose a ChatGPT-based chatbot is used to give medical advice or diagnose health conditions: if users act on absurd results, serious health risks can follow.”
“Third, using ChatGPT to create content for social media or other online platforms may spread misinformation or fake news. Misinformation can have serious social and political consequences, because it can be used to manipulate public opinion or undermine democratic processes.”
Language models like ChatGPT should be used with caution, and their performance must be carefully monitored to ensure they continue to produce accurate and appropriate responses. It is also important to recognize the limitations of these models and not treat them as replacements for human intelligence and expertise.
Of course, technology must be used responsibly, and I was curious what ChatGPT thought about that. So I asked, “Given that humans have rarely used new technologies responsibly, what are the consequences in this scenario?”