
This agency wants to determine exactly how much you trust AI


Harvard Assistant Professor Himabindu Lakkaraju studies the role that trust plays in human decision-making in professional settings. She is working with nearly 200 doctors at Massachusetts hospitals to understand how trust in AI can change the way doctors diagnose a patient.

For common conditions like influenza, AI isn't very helpful, since human professionals can recognize them fairly easily. But Lakkaraju has found that AI can help doctors diagnose hard-to-identify illnesses such as autoimmune diseases. In her latest work, Lakkaraju and her colleagues gave doctors records of nearly 2,000 patients along with predictions from an AI system, then asked them to predict whether each patient would have a stroke within six months. They varied the information provided about the AI system, including its accuracy, confidence interval, and an explanation of how the system works. They found that doctors' predictions were most accurate when they were given the most information about the AI system.

Lakkaraju says she is happy to see that the National Institute of Standards and Technology is trying to measure trust, but says the agency should consider the role that explanations play in people's trust of AI systems. Without data to back them up, explanations alone can lead people to place too much trust in AI.

“Explanations can bring extraordinarily high confidence even when it is not warranted, which is a recipe for problems,” she says. “But once you start putting numbers on how good the explanation is, people’s confidence slowly calibrates.”

Other countries are also trying to confront the issue of trust in AI. The United States is among 40 countries that have signed on to AI principles that emphasize trustworthiness. A document signed by about a dozen European countries says that trustworthiness and innovation go hand in hand and can be considered “two sides of the same coin.”

NIST and the OECD, a group of 38 countries with advanced economies, are working on tools to designate AI systems as high or low risk. The Canadian government created an Algorithmic Impact Assessment process in 2019 for businesses and government agencies. It places AI into four categories, ranging from no impact on people’s lives or the rights of communities to very high risk that perpetuates harm to individuals and communities. Completing the assessment takes about 30 minutes. The Canadian approach requires that developers notify users of all but the lowest-risk systems.

EU lawmakers are considering AI regulations that could help define global standards for which kinds of AI are considered low or high risk and how the technology is regulated. Like Europe’s landmark General Data Protection Regulation privacy law, the EU’s AI strategy could lead the world’s largest companies deploying AI to change their practices worldwide.

The regulation calls for the creation of a public registry of high-risk forms of AI in use, in a database operated by the European Commission. Examples of AI deemed high risk in the document include AI used for education, employment, or as safety components of utilities such as electricity, gas, or water. The proposal will likely be amended before it is passed, but the draft calls for a ban on AI for social scoring of citizens by governments and on real-time facial recognition.

The EU proposal also encourages allowing companies and researchers to experiment in areas called “sandboxes,” designed to ensure the legal framework is “innovation-friendly, future-proof, and resilient to disruption.” Earlier this month, the Biden administration introduced the National AI Research Resource Task Force, which aims to share government data for research on issues such as health care or autonomous driving. Final plans would require congressional approval.

For now, the AI user trust score is being developed with AI practitioners in mind. Over time, though, the findings could enable individuals to avoid untrustworthy AI and nudge the marketplace toward deploying robust, tested, and reliable systems. Of course, that is only if they know AI is being used at all.

