Google’s ChatGPT rival, Bard, is facing criticism from some employees at the tech giant, who are raising concerns about the usefulness of the company’s generative AI projects. Bard was announced earlier this year as one of Google’s key chatbot offerings, developed in response to the launch of OpenAI’s ChatGPT. However, some Google designers, product managers, and engineers have expressed doubts about the value of Bard, as well as about the significant resources being devoted to the project.
In an invite-only Discord chat, conversations about the bot’s practicality and the allocation of resources have been taking place. The discussions have highlighted the ongoing challenge of determining the true usefulness of large language models (LLMs) like Bard. Cathy Pearl, a user experience lead for Bard, questioned the impact and helpfulness of LLMs in a chat message in August, writing, “Like really making a difference. TBD!”
Another senior product manager, Dominik Rabiej, admitted that he has reservations about trusting the output of LLMs without independent verification. While expressing a desire to reach a point where the output can be trusted, Rabiej acknowledged that the technology is not quite there yet.
One of the key concerns voiced by employees relates to the accuracy and reliability of AI-powered chatbots. Chatbots often generate false information or misrepresent facts, a phenomenon known as “hallucination.” The recently launched Bard chatbot, for example, falsely claimed that there was a ceasefire between Gaza and Israel, despite the ongoing conflict.
This is not the first time that Google employees have raised doubts about the company’s focus on generative AI. Insider previously obtained leaked audio recordings in which employees expressed concerns about the impact of Google’s aggressive pursuit of generative AI technology. The leaked recordings revealed questions from employees about the company’s overall strategy and whether it has become too reliant on AI.
Google has yet to respond to requests for comment regarding the concerns raised by its employees. However, these discussions shed light on the ongoing debate within the tech industry about the benefits and limitations of generative AI. As the field continues to evolve, companies like Google will need to address these concerns to ensure the development of reliable and useful AI applications.
In summary, Google’s ChatGPT rival, Bard, is facing criticism from some Googlers who question the bot’s usefulness and the resources being dedicated to generative AI projects. Concerns center on the accuracy of AI-powered chatbots, including documented examples of false information being generated. The criticism is part of a broader debate within the tech industry about the benefits and limitations of generative AI, and Google has yet to respond to the concerns raised by its employees.