Google hopes AI can turn search into conversation

Google often uses its annual developer conference, I/O, to showcase artificial intelligence with a wow factor. In 2016, it introduced the Google Home smart speaker with Google Assistant. In 2018, Duplex debuted, placing phone calls to schedule appointments with businesses. In keeping with that tradition, CEO Sundar Pichai last month introduced LaMDA, an AI system “designed to make conversation about any topic.”

In an onstage demonstration, Pichai showed what it’s like to converse with a paper airplane and the dwarf planet Pluto. For each query, LaMDA answered with three or four sentences meant to resemble a natural conversation between two people. Pichai said that over time, LaMDA could be integrated into Google products including Assistant, Workspace, and, most importantly, Search.

“We believe that LaMDA’s natural conversational capabilities have the potential to make information and computing more accessible and usable,” Pichai said.

LaMDA’s demo offers a window into Google’s vision for a search experience that goes beyond a list of links, one that could change how billions of people search the web. That vision centers on AI that can infer meaning from human language, carry on a conversation, and answer multifaceted questions the way an expert would.

Also at I/O, Google introduced another AI tool, called Multitask Unified Model (MUM), which can process searches that combine text and images. VP Prabhakar Raghavan said that one day users could take a picture of a pair of shoes and ask the search engine whether the shoes would be good for climbing Mount Fuji.

MUM produces results across 75 languages, which Google claims gives it a more comprehensive understanding of the world. An onstage demo showed how MUM would respond to the search query “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently?” That query is phrased differently from how you might search Google today, because MUM aims to reduce the number of searches needed to find an answer. MUM can summarize and generate text; it would understand that comparing Mount Adams to Mount Fuji means trip preparation might call for search results about fitness training, hiking gear recommendations, and weather forecasts.

In a paper titled “Rethinking Search: Making Experts out of Dilettantes,” published last month, four Google Research engineers envisioned search as a conversation with a human expert. An example in the paper considers the query “What are the health benefits and risks of red wine?” Today, Google responds with a list of bullet points. The paper suggests a future response might look more like a paragraph saying that red wine promotes cardiovascular health but stains your teeth, citing and linking to the sources of that information. The paper shows the response as text, but it’s easy to imagine spoken responses as well, like today’s experience with Google Assistant.

But relying more heavily on AI to decode text also carries risks, because computers still struggle to understand language in all its complexity. The most advanced AI for tasks such as generating text or answering questions, known as large language models, has shown a tendency to amplify bias and to produce unpredictable or toxic text. One such model, OpenAI’s GPT-3, was used to create interactive stories for animated characters, but it has also generated text about child sex scenes in an online game.

In a paper and experiments posted online last year, researchers from MIT, Intel, and Facebook found that large language models exhibit biases based on stereotypes about race, gender, religion, and occupation.

As the text generated by these models becomes more convincing, says Rachael Tatman, a linguist with a PhD in the ethics of natural language processing, it can lead people to believe they are speaking with an AI that understands the meaning of the words it generates, when in fact it has no commonsense understanding of the world. That can be a problem when the model generates text that is toxic toward people with disabilities or Muslims, or that tells people to commit suicide. Growing up, Tatman recalls, a librarian taught her how to judge the validity of Google search results. If Google combines large language models with search, she adds, users will have to learn how to evaluate conversations with an AI expert.
