Technology

The race to understand the exciting and dangerous world of language AI


Among other things, this is what Gebru, Mitchell, and five other scientists warn about in their paper, which calls LLMs “stochastic parrots.” “Language technology can be very useful when it is appropriately scoped, situated, and framed,” says Emily Bender, a professor of linguistics at the University of Washington and a co-author of the paper. But the general-purpose nature of LLMs, and the persuasiveness of their mimicry, entices companies to use them in areas they are not necessarily equipped for.

In a keynote at one of the biggest AI conferences, Gebru tied the rapid deployment of LLMs to consequences she has experienced in her own life. Gebru was born and raised in Ethiopia, where an escalating war has ravaged the northernmost Tigray region. Ethiopia is also a country where 86 languages are spoken, nearly all of them unaccounted for in mainstream language technologies.

Despite these linguistic shortcomings, Facebook relies heavily on LLMs to automate its content moderation globally. When the war in Tigray first broke out in November, Gebru saw the platform flounder in the face of the wave of disinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation: communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated away, they are then scraped as training data to build the next generation of LLMs. And those models, regurgitating what they were trained on, end up spewing these toxic linguistic patterns back onto the internet.

In many cases, researchers haven’t investigated thoroughly enough to know how this toxicity might manifest in downstream applications. But some scholarship does exist. In her 2018 book Algorithms of Oppression, Safiya Noble, an associate professor of information and African American studies at the University of California, Los Angeles, documented how biases embedded in Google search perpetuate racism and, in extreme cases, may even motivate racial violence.

“The consequences are very dire and important,” she says. “Google is not just an essential gateway to knowledge for ordinary citizens. It also provides the information infrastructure for institutions, universities, and state and federal governments.”

Google is already using an LLM to improve some of its search results. With its latest announcement of LaMDA and a recent proposal published in a preprint paper, the company has made clear that it will only increase its reliance on the technology. Noble worries that this could make the problems she uncovered even worse: “The fact that Google’s ethical AI team was fired for raising very important questions about the racial and gendered patterns embedded in large language models should have been a wake-up call.”

BigScience

The BigScience project began as a direct response to the growing need for scientific scrutiny of LLMs. Observing the technology’s rapid spread and Google’s attempts to censor Gebru and Mitchell, Wolf and several colleagues realized it was time for the research community to take matters into its own hands.

Inspired by open scientific collaborations like CERN in particle physics, they conceived of an open-source LLM that could be used to conduct critical research independent of any company. In April of this year, the group received a grant to build one using the French government’s supercomputer.

In tech companies, an LLM is often built by only six people who have primarily technical expertise. BigScience wanted to bring in hundreds of researchers from a wide range of countries and disciplines to take part in a truly collaborative model-building process. Wolf, who is French, first approached the French NLP community. From there, the initiative grew into a global operation of more than 500 people.

The collaboration is now loosely organized into dozens of working groups, each tackling a different aspect of model development and investigation. One group will measure the model’s environmental impact, including the carbon footprint of training and running the LLM and the life-cycle costs of the supercomputer. Another will focus on developing responsible ways of sourcing the training data, looking for alternatives to simply scraping text from the web, such as transcribing historical radio archives or podcasts. The goal here is to avoid toxic language and the nonconsensual collection of private information.
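
To make the carbon-accounting idea concrete: a rough estimate of a training run’s footprint is usually just the hardware’s energy use, scaled by data-center overhead, multiplied by the carbon intensity of the local electricity grid. The sketch below illustrates that arithmetic; it is a hypothetical example, not BigScience’s actual measurement pipeline, and every figure in it is a placeholder.

```python
# Minimal sketch of the kind of carbon-footprint arithmetic such a working
# group might use (hypothetical figures, not BigScience's methodology).
# energy (kWh) = GPUs * average power per GPU (kW) * hours * PUE
# emissions    = energy * carbon intensity of the grid (kg CO2e per kWh)

def training_co2e_kg(num_gpus: int,
                     avg_gpu_power_kw: float,
                     hours: float,
                     pue: float,
                     grid_kg_co2e_per_kwh: float) -> float:
    """Return a rough CO2-equivalent estimate, in kg, for one training run."""
    energy_kwh = num_gpus * avg_gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical example: 384 GPUs averaging 0.3 kW each for 30 days,
# with a PUE of 1.2 on a relatively low-carbon grid (0.06 kg CO2e/kWh).
if __name__ == "__main__":
    print(f"{training_co2e_kg(384, 0.3, 24 * 30, 1.2, 0.06):,.0f} kg CO2e")
```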


