January 31, 2023

Ever since the launch of OpenAI’s ChatGPT, Google has been scrambling to build a rival. Now, Google’s subsidiary DeepMind, known for its pioneering work in AI research, has announced plans to launch a new chatbot called Sparrow. The company, which was acquired by Google nine years ago, is planning to release Sparrow for a “private beta” in 2023.

DeepMind first introduced Sparrow as a proof-of-concept in a research paper last year and is now marketing it as a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers.” This is a direct response to concerns experts have raised about the potential dangers of chatbots, such as the spread of inaccurate or invented information.

Sparrow’s focus on safety and source citation

According to DeepMind CEO Demis Hassabis, the slight delay in Sparrow’s launch is to ensure that it ships with specific features ChatGPT lacks. Most notably, Sparrow can cite specific sources when providing answers, something ChatGPT currently does not do. Hassabis said, “it’s right to be cautious on that front.” In early tests, Sparrow provided a plausible answer supported by evidence 78% of the time when asked a factual question.

Additionally, DeepMind has focussed on Sparrow’s behaviour-constraining rules, as well as its willingness to decline to answer questions in “contexts where it is appropriate to defer to humans.” This contrasts with ChatGPT, which has gone viral for its impressive ability to help with a wide range of tasks but has also caused alarm with its capacity for discriminatory comments and malware-writing skills.


However, as with any chatbot, Sparrow’s true test will be its beta release and how it handles questions that may not be appropriate to answer. To address these concerns, DeepMind says it is developing better rules for Sparrow, which “will require both expert input on many topics (including policymakers, social scientists, and ethicists) and participatory input from a diverse array of users and affected groups.” Sam Altman, CEO of OpenAI, has similarly spoken about the difficulty of opening up AI chatbots to the public without causing collateral damage.