AI-generated answers in Google can perpetuate a cycle of misinformation. When users search, they may receive answers from websites or from Google's AI model, Bard. If these answers are wrong, users may accept them as truth and share them, unwittingly spreading the misinformation further.
This article delves into how AI-generated answers can create a misinformation loop within Google, impacting society and individuals. It also discusses collaborative efforts between Google, website owners, and content creators to promote best practices for generating and publishing AI-generated content to combat this issue.
Misinformation in Google
Google’s search engine sometimes gives quick answers to questions by showing a Featured Snippet at the top of the results page, taken from websites it has crawled. On Monday, X user Tyler Glaiel noticed that Google’s answer to “can you melt eggs” was “yes”, taken from Quora’s ChatGPT feature, which uses an outdated version of OpenAI’s language model that often makes up false information.
The Google Search result shared by Glaiel and verified by Ars Technica said: “Yes, an egg can be melted. The most common way to melt an egg is to heat it using a stove or microwave.”
"This is actually hilarious. Quora SEO'd themselves to the top of every search result, and is now serving chatGPT answers on their page, so that's propagating to the answers google gives," Glaiel wrote. SEO stands for search engine optimization, the practice of making a website's content more likely to appear higher in Google's search results.
At the time of writing, Google answered our "can you melt eggs" query with "yes," but answered "can you melt an egg" with "no." Both answers were drawn from the same Quora page, titled "Can you melt an egg?", which includes the incorrect AI-written text quoted above, the same text that appears in Google's Featured Snippet.
Interestingly, Quora’s answer page says the AI-generated result comes from “ChatGPT”, but if you ask OpenAI’s ChatGPT if you can melt an egg, it will always tell you “no.”
It turns out that Quora's AI answer feature does not actually use ChatGPT, but instead uses the earlier GPT-3 text-davinci-003 API, which is known to present false information more often than OpenAI's newer language models. After an X user named Andrei Volt noticed that text-davinci-003 can give the same result as Quora's AI bot, we replicated the result in OpenAI's Playground development platform. It does indeed say that you can melt an egg by heating it.
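For context, text-davinci-003 was served through OpenAI's older completions endpoint, not the chat endpoint ChatGPT uses. The sketch below shows roughly what such a request looks like; it only builds the request body rather than sending it, since OpenAI has since retired text-davinci-003 and a live call would fail. The field names follow the old `/v1/completions` API, and the parameter values are illustrative assumptions:

```python
def legacy_completion_request(prompt):
    """Build a request body for OpenAI's legacy /v1/completions endpoint,
    the API Quora's answer feature reportedly used."""
    return {
        "model": "text-davinci-003",  # older model, more prone to confident errors
        "prompt": prompt,
        "max_tokens": 100,
        "temperature": 0.7,
    }

payload = legacy_completion_request("Can you melt an egg?")
# Sending it would have looked like (model now retired, so this would fail):
# requests.post("https://api.openai.com/v1/completions",
#               headers={"Authorization": "Bearer YOUR_API_KEY"},
#               json=payload)
```

Because this endpoint returns raw text completions with no chat-style safety tuning, the same prompt can yield a confidently wrong answer that ChatGPT's chat models would refuse to give.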
The Problem of Misinformation
Misinformation is false, inaccurate, or misleading information that is spread regardless of whether there is intent to deceive. It originates from diverse sources, including fake news sites, social media, blogs, news websites, and word of mouth. It may serve malicious purposes such as political propaganda or scams, profit motives such as clickbait, or simply arise from human error or cognitive bias. One important step in combating it is educating people about how to identify and avoid it.
The Loop of Misinformation
Misinformation can escalate into a damaging loop within Google, where AI-generated answers reinforce each other's falsehoods or biases, driven by the incentives of the search algorithm and the ad business. This hinders the discovery of accurate and reliable information. Here is how a loop of misinformation works:
- A user asks Google a question that has no clear or definitive answer, such as "Is climate change real?" or "Is the COVID-19 vaccine safe?".
- Google returns an AI-generated answer based on false or biased information from the web, such as "Climate change is a hoax" or "The COVID-19 vaccine causes autism".
- When the user sees the AI-generated answer at the top of the search results, they often perceive it as trustworthy and click through to the website that provided it to learn more.
- The website shows more false or biased information on the same topic, as well as ads from other websites that promote similar views.
- The user becomes convinced by the false or biased information and shares it with others on social media or other platforms.
- The false or biased information reaches more people who search for the same question on Google.
- Google rewards websites with more traffic, engagement, and relevance by granting them higher rankings, bids, and authority in its search algorithm and ad business.
- Google returns the same AI-generated answer to more users who ask the same question, creating a feedback loop that amplifies and rewards the misinformation.
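The feedback dynamic described above can be illustrated with a toy simulation. This is a deliberately simplified sketch: the ranking rule and all numbers are invented for illustration and bear no relation to Google's actual algorithm. It shows only the core mechanism, that engagement feeding back into ranking lets an early lead compound:

```python
# Toy model of the misinformation feedback loop.
# The ranking formula and all numbers are illustrative assumptions only.

def run_loop(pages, rounds=5):
    """Each round, the top-ranked page receives the clicks, and that
    engagement feeds back into its ranking score for the next round."""
    for _ in range(rounds):
        top = max(pages, key=lambda p: p["score"])
        top["clicks"] += 100                   # users click the featured answer
        top["score"] += top["clicks"] * 0.01   # engagement boosts ranking

pages = [
    {"name": "accurate-site", "score": 1.0, "clicks": 0},
    {"name": "misinfo-site",  "score": 1.1, "clicks": 0},  # slightly ahead, e.g. via SEO
]
run_loop(pages)
# The page that starts slightly ahead captures all the clicks,
# and its ranking lead widens every round.
```

The point of the sketch is that the misinformation page never needs to be more accurate, only marginally better optimized at the start; the loop does the rest.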
The Consequences of Misinformation
The loop of misinformation can have serious consequences for individuals and society as a whole, affecting people's lives, opinions, and behavior in negative ways:
- Misinformation is dangerous and can lead to physical harm or death. It can discourage people from getting needed medical treatment and lead to panic, fraud, or dangerous situations.
- False information, especially about personal, social, identity, or moral matters, can cause emotional distress, depression, anger, isolation, discrimination, shame, and conflict.
- Misinformation can impair cognition and foster bias. It can make people ignore reality, believe conspiracy theories, and become intolerant and polarized.
The Solutions to Misinformation and AI Generated Answers
The loop of misinformation is not inevitable or irreversible. There are ways to combat false or misleading AI-generated answers on Google, involving several groups:
- Fact checkers: Fact checkers verify and correct misinformation in AI-generated answers and offer reliable information sources. They use tools and platforms to enhance their efforts.
- Researchers: Researchers are studying misinformation and AI-generated answers on Google, proposing solutions, and using various methodologies to support their work.
- Regulators: Regulators can monitor and regulate AI-generated answers on Google. They can hold Google accountable for its actions and responsibilities regarding AI-generated answers.
- Users: Users can apply tips and strategies from sources like Google, Full Fact, Rytr, Scalenut, and AISEO to evaluate search results more critically.
AI models trained on AI-generated data can decline in quality over time, due to a phenomenon called "model collapse." This is important to be aware of when developing and using AI models. To learn more, check out our article "AI Generated Data Makes AI Go MAD."
Conclusion
Google's AI-generated answers offer quick solutions but can also spread false or biased information, potentially creating a cycle of misinformation. This poses significant risks to individuals and society, negatively impacting their decisions and beliefs. To address this, collaboration among fact-checkers, researchers, regulators, and users is crucial.
In this article, we have explored how AI-generated answers can form a loop of misinformation in Google, how this can affect various aspects of our society and our lives, and what we can do to stop it. We hope this article has helped you understand the problem of false or misleading AI-generated answers on Google and its possible solutions.