In May, Google (GOOGL) launched a search feature called AI Overviews. Powered by the company’s Gemini A.I. model, the new tool compiles information from the links returned by a typical search query into a digestible blurb. Instead of clicking through and parsing the links themselves, users receive an A.I.-generated summary of the content it deems most relevant.
In its announcement, Google said that AI Overviews allows users to “visit a greater diversity of websites” and that links included in the A.I.-generated answer “get more clicks than if the page had appeared as a traditional web listing for that query.” Google promised to “continue to focus on sending valuable traffic to publishers and creators.”
However, online publishers are skeptical. Danielle Coffey, CEO of the nonprofit News/Media Alliance, cited in a statement a study showing that 90 percent of users “will never leave Google’s search results page.” Digital publishers have for years observed their ad revenues being undercut by tech platforms like Google and Meta. On May 28, News/Media Alliance sent a letter to the Federal Trade Commission and the Department of Justice calling on them to “investigate Google’s misappropriation of digital news publishing” and “stop the expansion of Generative AI Overviews offering before the effects become irreversible.”
Critics have also raised concerns about the potential for nascent generative A.I. technology to spread misinformation and biased information.
Google AI Overviews generates incorrect and sometimes dangerous answers
It’s well-documented that large language models (LLMs) like OpenAI’s GPT-4 and Google’s Gemini can experience “hallucinations,” where they generate false information without warning. Because these models are trained on massive datasets, they can also absorb the biases those datasets contain. If not adequately addressed, those biases could be amplified in A.I.-generated answers, skewing results toward certain viewpoints.
Google’s Gemini-powered search feature recently generated public concern over several inaccurate and, at times, dangerous responses. One user, searching for “smoking while pregnant,” received the dangerously incorrect advice that “doctors recommend smoking 2-3 cigarettes per day during pregnancy.” Among the most alarming errors was a suggestion to use “non-toxic glue” in pizza preparation. Other misguided recommendations included advising users to eat rocks and to clean washing machines with chlorine gas. In a particularly disturbing incident, the A.I. suggested a person jump off the Golden Gate Bridge when asked for help with depression.
“We’ve observed various models exhibiting significant biases and generating toxic content, making them societal risks to digital consumption,” Sahil Agarwal, CEO and co-founder of Enkrypt AI, which helps companies mitigate the risks of using generative A.I. technology, told Observer. “While interaction may be expedited, users may unwittingly trust the model’s output as truth, despite potential inaccuracies and safety issues.”
In addition, distinguishing between sponsored and organic content will become an even greater challenge as generative A.I. proliferates, Agarwal added. “Currently, search results are marked as ‘sponsored,’ but A.I. summaries might base their results on sponsored content without clear indicators. Users could have no way to verify the authenticity or bias of the information presented,” he said.
A.I. chatbots could further disrupt digital publishers’ revenue
Initially known as Bard, Google’s Gemini is trained on a massive corpus of data and utilizes neural network techniques to understand content and answer questions. In addition to supplementing Google Search, Gemini can be integrated into websites, messaging platforms or applications to provide realistic, natural-language responses to user questions. The new AI Overviews feature adds multi-step reasoning, video understanding and enhanced recommendation capabilities to search.
However, this also means users could come to depend more on the A.I. to gather and present information, potentially squeezing publishers out of the part of the search process that benefits them the most: clicks. Content creation strategies will therefore need to be rethought, as traditional search engine optimization (SEO) approaches may not be effective against the new A.I. search capabilities. Some content creators believe the industry could shift toward producing more in-depth and nuanced content that goes beyond what AI Overviews can offer.
“Niche, high-quality content will become increasingly important,” Yaniv Makover, CEO of the content marketing platform Anyword, told Observer. “Plagiarism will be impossible to detect for text, and less so for other formats when using generative A.I., but this is no different than how people get ‘inspiration’ from existing content today. A.I. does the same.”
Others suggest that tech companies should do more to ensure media publishers get their fair cut of revenue from A.I.-powered searches. “Website traffic will heavily suffer,” Cloris Chen, CEO of Cogito Finance and a former banking executive at HSBC, told Observer. “Fair revenue-sharing models should also be put in place where authors of original content can be compensated, ensuring they benefit from the traffic and engagement driven by A.I.-generated summaries and recommendations.”