
After mistakes and dissatisfaction, Google backs down and says Bard is not a search engine

by Janes

Since ChatGPT was released in November 2022, it has been touted as a possible competitor to search sites. Google responded by announcing Bard, its artificial intelligence chatbot. The rush to show something, however, displeased employees. Internally, executives are trying to calm the mood and now say that the product is not a search engine.

The statements were made on Thursday (the 2nd), when executives answered questions submitted through Dory, the company's internal forum. Jack Krawczyk, Bard's product lead, said that both the Google product and ChatGPT are language models, not knowledge models. "Bard is not search," he added.

Krawczyk also explained that artificial intelligences of this kind are great at generating text that seems to have been written by humans. The problem is the information: they cannot verify that the facts they state are real.

Elizabeth Reid, vice president of search engineering, reinforced Krawczyk's statements: "Bard and search are separate things." She recalled that Google has already integrated language models such as BERT and MUM into its search engine.

These models do not, however, function as chatbots: they work behind the scenes of indexing to better understand the content of each listed page.

Google said Bard would be integrated into Search
Why, then, did Google get into the business of competing with ChatGPT and Microsoft's Bing Chat? According to the product lead, you simply can't stop users from trying to use these products as search engines.

This is the first time the company has tried to separate Bard from search. Ever since the chatbot was announced, executives had been saying it would be integrated into the search service.

This began with the launch blog post, which read: "We are working to bring these latest AI advances to our products, starting with Search."

Problems and internal dissatisfaction
Since then, the tool has shown problems. In one of the usage examples, Bard said that the first images of an exoplanet were taken by the James Webb Space Telescope.

This information is incorrect: the first images of an exoplanet were taken by the Very Large Telescope (VLT) in 2004.

The mistake cost Google around $100 billion in market value: the company's shares plummeted when scientists pointed out the error.

Google then asked employees to set aside some of their time to test Bard and rewrite incorrect answers.

Boring job, huh? But stay calm, there is a prize: a badge on their internal profile and the chance to give live feedback to the artificial intelligence team. Wow, that's cool, that should do it!

The staff are dissatisfied, and if the intention was to calm them down, Thursday's meeting (the 2nd) doesn't seem to have helped at all.

According to CNBC, which obtained information about the meeting, many workers said they were left even more confused by the executives' inconsistent responses.

Bing Chat and ChatGPT also make mistakes
Displaying incorrect information is not exclusive to Bard. Bing Chat and ChatGPT do it quite often.

Bing Chat has already insisted to a user that the year was 2022, and it has erred when summarizing information from a financial report and when giving travel tips.

ChatGPT was used by the CNET website to write finance articles, and it stumbled badly when it came to doing the math.

Maybe Krawczyk and Reid really are right. Bots like these are best used as writing aids. For information, nothing beats good old search, at least for now.
