Among the richest and most powerful companies in the world, Google, Facebook, Amazon, Microsoft, and Apple have made AI a core part of their businesses. Advances over the past decade, particularly in an AI technique called deep learning, have allowed them to monitor users’ behavior; recommend news, information, and products to them; and, most of all, target them with ads. Last year Google generated more than $140 billion in revenue, largely from advertising. Facebook made $84 billion.
The companies have invested heavily in the technology that has brought them such enormous wealth. Alphabet, Google’s parent company, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for the rights to commercialize its algorithms.
At the same time, tech giants have become major investors in university-based AI research, heavily influencing its scientific priorities. Over the years, more and more ambitious scientists have moved to work for tech giants full time or adopted a dual affiliation. From 2018 to 2019, 58% of the most cited papers at the top two AI conferences had at least one author affiliated with a tech giant, compared with only 11% a decade earlier, according to a study by researchers in the Radical AI Network, a group that seeks to challenge power dynamics in AI.
The problem is that the corporate AI agenda has focused on techniques with commercial potential, largely ignoring research that could help address challenges like economic inequality and climate change. In fact, it has made those challenges worse. The drive to automate tasks has cost jobs and given rise to grueling work like data cleaning and content moderation. The push to create ever larger models has caused AI’s energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.
This is the status quo that Gebru and a growing movement of like-minded scientists want to change. Over the past five years, they have sought to shift the field’s priorities away from simply enriching tech companies by expanding who gets to participate in developing the technology. Their goal is not only to mitigate the harms caused by existing systems but to create a new, more equitable and democratic AI.
“Hello from Timnit”
In December 2015, Gebru sat down to write an open letter. Halfway through her PhD at Stanford, she was attending the Neural Information Processing Systems conference, the largest annual gathering of AI researchers. Of the more than 3,700 researchers there, Gebru counted only five who were Black.
Once a small gathering on a niche academic topic, NeurIPS (as it’s now known) was fast becoming the biggest annual AI job fair. The world’s wealthiest companies were coming to show off demos, throw extravagant parties, and write hefty checks for the rarest of commodities in Silicon Valley: skilled AI researchers.
That year, Elon Musk arrived to announce the nonprofit venture OpenAI. He, Sam Altman, then president of Y Combinator, and Peter Thiel, cofounder of PayPal, put up $1 billion to solve what they believed was an existential problem: the possibility that a superintelligence could one day take over the world. Their solution: build an even better superintelligence. Of the 14 advisors or technical team members, 11 were white men.
While Musk was partying, Gebru was dealing with humiliation and harassment. At one conference party, a group of drunk men in Google Research T-shirts surrounded her and subjected her to unwanted hugs, a kiss on the cheek, and a photo.
Gebru wrote a scathing critique of what she had observed: the spectacle, the cult-like worship of AI celebrities, and most of all, the overwhelming homogeneity. This boys’-club culture, she wrote, had already pushed talented women out of the field. It was also leading the entire community toward a dangerously narrow conception of artificial intelligence and its impact on the world.
She noted that Google had already deployed a computer vision algorithm that classified Black people as gorillas. And the increasing sophistication of drones was putting the US military on a path toward lethal autonomous weapons. But there was no mention of these issues in Musk’s grand plan to stop AI from taking over the world in some theoretical future scenario. “We don’t have to wait for the future to see AI’s potential negative effects,” Gebru wrote. “It’s already happening.”
Gebru never published her critique. But she realized that something needed to change. On January 28, 2016, she sent an email with the subject line “Hello from Timnit” to five other Black AI researchers. “I have always been sad about the lack of color in AI,” she wrote. “But now I’ve seen 5 of you 🙂 and thought it would be cool if we started a Black in AI group or at least got to know each other.”
The email sparked a discussion. What was it about being Black that informed their research? For Gebru, her work was largely a product of her identity; for others, it wasn’t. But after meeting they agreed: if AI was going to play a bigger role in society, the field needed more Black researchers. Otherwise, it would produce weaker science, and its adverse consequences could get far worse.
A Profit-Driven Agenda
As Black in AI was just beginning to coalesce, AI was hitting its commercial stride. That year, 2016, tech giants spent an estimated $20 to $30 billion on developing the technology, according to the McKinsey Global Institute.
Spurred on by corporate investment, the field warped. Thousands more researchers began studying AI, but they mostly wanted to work on deep-learning algorithms, such as the ones behind large language models, says Suresh Venkatasubramanian, a computer science professor who now serves in the White House Office of Science and Technology Policy. “So you shift all of your research to deep learning. And then the next PhD student looks around and says, ‘Everyone is doing deep learning. Maybe I should do that, too.’”
But deep learning is not the only technique in the field. Before its boom, there was a different approach to AI known as symbolic reasoning. Whereas deep learning uses massive amounts of data to teach algorithms meaningful relationships in information, symbolic reasoning focuses on explicitly encoding knowledge and logic based on human expertise.
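The contrast can be sketched in a few lines of code. The toy example below (a hypothetical illustration, not drawn from the article) encodes a single piece of knowledge the symbolic way: as an explicit, human-written rule applied by forward chaining, rather than a statistical pattern learned from data.

```python
# Symbolic reasoning sketch: knowledge is written down explicitly as facts
# and rules by a human expert, not learned from a large dataset.
# (Hypothetical example for illustration only.)

# Facts and one rule, stored as (subject, predicate, object) triples.
facts = {("socrates", "is_a", "human")}
rules = [
    # "All humans are mortal," encoded explicitly:
    # body pattern -> head pattern, with ?x as a variable.
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def forward_chain(facts, rules):
    """Repeatedly apply each rule to matching facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, p, o), (_, cp, co) in rules:
            for (fs, fp, fo) in list(derived):
                if fp == p and fo == o:          # fact matches the rule body
                    new_fact = (fs, cp, co)      # bind ?x to the matched subject
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

# The system "knows" Socrates is mortal without ever seeing training examples.
print(("socrates", "is_a", "mortal") in forward_chain(facts, rules))  # True
```

A deep-learning system would instead infer such relationships statistically from many examples; the symbolic version trades that flexibility for transparency and data efficiency, which is part of why researchers quoted below want to combine the two.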
Some researchers now believe those techniques should be combined. A hybrid approach would make AI far more efficient in its use of data and energy, giving it the knowledge and reasoning abilities of an expert as well as the capacity to update itself with new information. But companies have little incentive to explore alternative approaches when the surest way to maximize their profits is to build ever bigger models.