
Evolving toward a more equitable AI


The pandemic that has spread around the world over the past year has cast a cold, harsh light on many things: the varying levels of preparedness to respond; collective attitudes toward health, technology, and science; and wide financial and social disparities. As the world continues to navigate the COVID-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it is critical to resolve the competing priorities of protecting the public's health equitably while ensuring privacy.

The protracted crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It is now more critical than ever for companies, governments, and society to be careful in how they apply technology and handle personal information. The expanded and rapid adoption of artificial intelligence (AI) shows how adaptive technologies can intersect with humans and social institutions in potentially risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, director of ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows among all of those parties will get renegotiated in a new social data contract.”

Artificial intelligence at work

When the COVID-19 crisis began to emerge in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatments, helping to detect potential COVID-19 symptoms, and allocating scarce resources like intensive-care beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to help develop cutting-edge vaccines and treatments.

While advanced data-analysis tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered massive amounts of data, but the data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected, including Black, Brown, and Indigenous people. Nor, Schlesinger says, have some of the diagnostic advances they’ve made.

For example, biometric wearables like Fitbit or Apple Watch demonstrate promise in their ability to detect potential COVID-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses often rely on flawed or limited data sets and can introduce bias or unfairness that disproportionately affects vulnerable people and communities.

There is some emerging research showing that the green LED light these devices use has a more difficult time reading pulse and oxygen saturation on darker skin tones, Schlesinger says. “So it might not do an equally good job of catching COVID-19 symptoms for those with Black and Brown skin.”

AI has shown greater effectiveness in helping analyze very large data sets. A team at the University of Southern California’s Viterbi School of Engineering developed an AI framework to help analyze COVID-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to the 11 most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.
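To make that narrowing step concrete, here is a minimal, hypothetical sketch, not the USC team’s actual framework, of the general pattern: train a model on known epitope data, then score and rank new candidates. All feature encodings, labels, and numbers below are synthetic stand-ins.

```python
# Minimal sketch: ranking candidate vaccine targets with a classifier.
# This is NOT the USC Viterbi team's actual framework; it only illustrates
# the general idea of training on known epitope data and scoring candidates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: feature vectors for known epitopes
# (stand-ins for encoded physicochemical properties) with labels
# indicating whether each elicited an immune response.
X_train = rng.normal(size=(600, 8))
y_train = rng.integers(0, 2, size=600)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score 26 hypothetical candidates and keep the 11 highest-ranked,
# mirroring the narrowing step described above.
candidates = rng.normal(size=(26, 8))
scores = model.predict_proba(candidates)[:, 1]
top_11 = np.argsort(scores)[::-1][:11]
print("Top-ranked candidate indices:", top_11)
```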

Other Viterbi researchers are applying AI to more accurately decipher cultural codes and better understand the social norms that guide ethnic and racial group behavior. That could have a significant impact on how a specific population fares during a crisis like a pandemic, owing to religious ceremonies, traditions, and other social norms that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on moral foundations theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, and that help inform individual and group behavior.

“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And in doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level: what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Addressing ethical concerns

It is critical, Schlesinger says, to interrogate the assumptions behind collected data and AI processes. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI, which is to look at all the places where humans are biased.”

Part of that challenge is performing a critical examination of the data sets that inform AI systems. It is essential to understand the data sources and the composition of the data, and to answer questions such as: How is the data made up? Does it include a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness?
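As a minimal sketch of one of those audit questions, the hypothetical Python snippet below compares the demographic makeup of a training set against assumed reference shares for the population a model will serve. The group names, counts, and the 5-percentage-point flagging threshold are illustrative assumptions, not a standard.

```python
# Minimal sketch of one dataset audit question raised above: does the
# training data include a representative mix of groups?
import pandas as pd

# Hypothetical training records with a self-reported demographic field.
data = pd.DataFrame({
    "group": ["A"] * 700 + ["B"] * 200 + ["C"] * 100,
    "label": [0, 1] * 500,
})

# Assumed reference shares for the population the model will actually serve.
reference = {"A": 0.60, "B": 0.25, "C": 0.15}

observed = data["group"].value_counts(normalize=True)
for group, expected in reference.items():
    gap = observed.get(group, 0.0) - expected
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: observed {observed.get(group, 0.0):.2f}, "
          f"expected {expected:.2f} -> {flag}")
```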

As people return to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; acoustic sensors to detect coughing or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, physical-distancing regulations, and mask requirements.

Such monitoring and analysis systems not only have technical-accuracy challenges but also pose core risks to human rights, privacy, security, and trust. The push for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace the movements of people who may have contracted or been exposed to COVID-19 and to establish virus transmission chains.

“The first question that needs to be answered is not just can we do this, but should we?” Schlesinger says. “Screening individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as a benefit for the greater good. We should have a robust conversation as a society about whether there is a good reason to implement these technologies in the first place.”

What the future looks like

As society returns to something approaching normal, it’s time to fundamentally re-evaluate the relationship with data and establish new norms for collecting it, as well as for its appropriate use, and potential misuse. When building and deploying AI, technologists will continue to make necessary assumptions about data and the processes around it, but the underpinnings of that data must be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately representative? How can the privacy of citizens and consumers be preserved?

As AI is more widely deployed, it’s crucial to consider how to engender trust as well. One approach is using AI to augment human decision-making, rather than replacing human input altogether.

“There will be more questions about the role AI should play in society, its relationship with human beings, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” Schlesinger says. “There are certain areas where AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. In places where AI doesn’t replace humans but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there needs to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without a clinician’s involvement. As much as we would love to believe AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”
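One common way to implement that human-in-the-loop pattern is to let a model act on a case only when its confidence clears a bar, and route everything else to a human reviewer. The sketch below assumes a confidence threshold and triage labels that are purely illustrative; none of them come from the article.

```python
# Minimal sketch of the human-in-the-loop pattern described above: the model
# only makes an automatic suggestion when it is confident; otherwise the
# case is deferred to a person.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, not a standard

@dataclass
class Prediction:
    case_id: str
    label: str
    confidence: float

def route(prediction: Prediction) -> str:
    """Return the AI suggestion only above the threshold; defer otherwise."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"{prediction.case_id}: auto-suggest '{prediction.label}'"
    return f"{prediction.case_id}: deferred to human reviewer"

for p in [Prediction("case-1", "low-risk", 0.97),
          Prediction("case-2", "high-risk", 0.62)]:
    print(route(p))
```

Framed this way, the threshold becomes an explicit, auditable policy choice rather than an implicit property of the model.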

It is critical that data collected and created by AI doesn’t exacerbate inequity but reduces it. There must be a balance between finding ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.

This content was produced by Insights, the dedicated content arm of MIT Technology Review. It was not written by the editorial team at MIT Technology Review.


