After a promotional video in which Bard gave false answers, Google took to manually fixing the chatbot’s errors.
Google Search VP Prabhakar Raghavan reportedly sent an email asking employees to rewrite Bard’s answers, with each person picking a topic they knew best.
The email also included a list of what “can” and “can’t” appear in the chatbot’s responses:
- answers should be written in the first person;
- answers should not express “opinions” of their own and should remain “neutral”;
- communication should be polite, relaxed, and accessible;
- responses should avoid biases regarding race, nationality, gender, age, religion, sexual orientation, political ideology, etc.;
- avoid responses that contain legal, medical, or financial advice, or insults.
Employees are asked to ensure that Bard “does not respond as a human,” avoids emotion, and does not claim to “have human-like experiences.” Earlier, Google CEO Sundar Pichai had asked employees to take an active part in testing the chatbot. Staff reportedly criticized the CEO for Bard’s “rushed” and “failed” launch.
Pichai is now giving employees the opportunity to “help shape the AI tool.” He also noted that some of Google’s “most successful products were not first-to-market” and “gained momentum because they addressed important user needs and were built on deep technical knowledge.”
The public had been waiting for Google’s response ever since OpenAI’s ChatGPT appeared late last year. AI technology has become enormously popular: Microsoft has invested billions in ChatGPT’s developer, OpenAI, and announced versions of its Bing search engine and Edge browser built on the technology. All of this was enough to rattle Alphabet and its investors, so Google hastily introduced its own chatbot and is preparing more than 20 additional AI products for release over the course of the year.
Source: Engadget