Google’s cautious approach hampers its progress in AI chatbots

Two years ago, some Google researchers urged the company’s leadership to release a “super chatbot” built on a new generation of AI technology called LaMDA (Language Model for Dialogue Applications). LaMDA was considered more powerful than any chatbot then available. The program took the form of a two-way conversation, capable of discussing philosophy and entertainment or engaging in witty wordplay with users.

Google has been developing AI chatbots for several years but limited testing to internal use, missing its opportunity when OpenAI swiftly announced ChatGPT.

Daniel De Freitas and Noam Shazeer, two Google AI researchers, argued that chatbots built on LaMDA would transform internet search habits and the way people interact with computers. They recommended granting access to third-party developers, integrating the technology into the Google Assistant virtual assistant, and releasing a public trial.


However, that did not happen. According to the WSJ, the two researchers faced repeated refusals from the company’s leadership, in part because the program did not meet the safety and fairness standards Google expects of an AI system. Both left Google in 2021 and went on to found their own company to continue working on similar technology, expressing regret that the chatbot was never released to the public.

Now Google finds itself playing catch-up as ChatGPT takes the spotlight. Microsoft, which has invested heavily in OpenAI, has unveiled a ChatGPT-based Bing AI. Pushed onto the defensive, Google hastily introduced a similar tool named Bard, even though it remains unfinished and gave an incorrect answer on its launch day.

As a pioneering company, Google has always maintained a cautious approach, one believed to have been shaped by years of researching and evaluating the biases and shortcomings of AI. Last year the company even fired engineer Blake Lemoine after he claimed its AI had become conscious. In January, Jeff Dean, Google’s head of AI, told employees in an internal meeting that the company faced “significant reputation risks” and therefore needed to act more cautiously “compared to a small startup.”

“Google is struggling to find the balance between the level of risk they must accept and maintaining their leading position in the world,” said Gaurav Nemade, a former Google product manager who worked on the chatbot until 2020, in an interview with the WSJ.

The Journey of Google’s AI Chatbot Development

Google’s ambition to build intelligent chatbots dates to 2013, when co-founder Larry Page hired renowned computer scientist Ray Kurzweil, who believed machines would someday surpass human intelligence, a moment he calls “the singularity.” Kurzweil began developing several chatbots, including one named Danielle, inspired by a novel he was writing at the time.

Google also acquired DeepMind, a company founded with the goal of creating artificial intelligence that could “emulate human-like intelligence.” DeepMind became famous for AI projects such as AlphaFold, which successfully predicted protein structures, and for training AI agents to play games such as soccer.

However, a series of pressures kept Google from fully realizing its chatbot project. In 2018, the company pledged not to use AI in military weapons after employee backlash against Project Maven, a contract with the US Department of Defense that used AI to automatically identify and track targets such as vehicles in drone footage. CEO Sundar Pichai later announced a framework of AI principles aimed at limiting the spread of technologies that exhibit unfair bias, discrimination, and other harmful behavior.

Google’s New Meena Chatbot Supersedes Existing AI Assistants

With most of Google’s AI projects halted or conducted in secrecy, De Freitas, a Brazilian-born engineer then working at YouTube, started a side project on AI. He built Meena, a chatbot that could mimic human conversation and communicate more naturally than any other chatbot at the time.

For years, Meena remained undisclosed. Internally, many Google employees worried about its potential dangers, particularly after Microsoft had to shut down its Tay chatbot in 2016 when users taught it offensive and racist language.

Meena only surfaced in 2020, when Google revealed that the chatbot had been trained on 40 billion words collected primarily from publicly available social media conversations. The development team sought permission from company leadership to release the chatbot publicly but continued to be turned down.

Nevertheless, the team didn’t give up. In 2020, the Google Brain AI research team took over the project and renamed it LaMDA. They also built on Transformer, a neural network architecture typically trained through self-supervision, which Google introduced in 2017 and which is considered the underlying foundation for chatbots like ChatGPT.
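For context, the Transformer’s core operation is scaled dot-product self-attention, in which every token in a sequence weighs every other token when computing its own representation. The NumPy sketch below is a minimal illustration of that idea from the publicly described 2017 architecture, not Google’s LaMDA code; the function name, matrix sizes, and example data are all hypothetical.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative toy).

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # each output mixes all values

# Toy usage: 4 tokens with 8-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)
```

In a full model, many such attention heads are stacked with feed-forward layers, and the network is trained by self-supervision, for example by predicting masked or next tokens in large text corpora.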

However, chatbot development faced further setbacks when former employees filed lawsuits against the company. In May 2021, Jeff Dean, the head of Google’s research division, affirmed the company’s continued investment in responsible AI development and pledged to double the size of its AI ethics team.

A Cautious Approach That Slowed Progress

LaMDA has been hailed as one of the most advanced language models and the “heart” of several Google-developed AIs, including Bard. According to Blake Lemoine, the company considered releasing a trial chatbot built on the technology in May 2022. However, his assertion that the AI was conscious sparked internal controversy and led to his dismissal.

De Freitas and Shazeer had also been trying since 2020 to bring LaMDA into the Google Assistant virtual assistant, recognizing its significant potential. At the time, Assistant had over 500 million users across smartphones, tablets, smart speakers, and TVs. The pair ran internal tests and planned a public demo, but once again Google executives stepped in and blocked the release.

“It caused turmoil at Google,” Shazeer said. “Eventually, we decided that maybe we’d have better luck researching this technology as a startup.”

Having exercised caution with AI chatbots for many years, Google then rushed to introduce Bard before it was fully refined. The tool is still undergoing internal testing and has not yet been released publicly to users.

According to Elizabeth Reid, Vice President of Search at Google, chatbot accuracy remains a significant concern. Models like these tend to produce strange responses when they lack sufficient information, a phenomenon researchers call “hallucination.” In some cases, software built on LaMDA technology has responded with fictional or off-topic information.

“It’s like talking to a child,” Reid said. “If a child thinks they need to answer a question but has nothing in their head, they’ll make up an answer that sounds plausible.”

Prabhakar Raghavan, the head of Google Search, also warned that current AI chatbots could create information pitfalls for users. “The kind of AI people are talking about can sometimes lead to what’s called hallucination. This is where the machine supplies a very plausible-sounding answer that is completely made up,” he stated in early February.

Raghavan said Google will continue to refine its chatbot cautiously for now. “We will be cautious in addressing concerns in the ecosystem, which is what we plan to focus on the most,” he added.
