Just as the IBM Selectric shown here added a translation layer between the typist at the keyboard and the words that emerged on the page, explainable AI holds the promise of detailing why the AI made the recommendation it did, to the satisfaction of auditors.

Artificial intelligence could advance science in dramatic ways, but beyond the technical challenges, one of the hurdles is cultural. We must trust the systems we build.

“Making AI trustworthy takes a lot of commitment, and it’s a long journey, and we are just in the beginning of that journey,” says Pin-Yu Chen, Research Staff Member at the IBM Thomas J. Watson Research Center. Chen and his colleagues are working on ways to ensure that AI is trustworthy using four key components: fairness, explainability, robustness, and accountability.

“These four key components of trustworthy AI are very crucial for AI to automate the discovery process,” says Payel Das, Research Staff Scientist and Manager at the IBM Thomas J. Watson Research Center. “The end product of this discovery process—if it’s AI-driven—has to be trusted by the human, the society.”

Chen and Das both refine AI systems at IBM Research; they design AI that is robust to adversarial threats, generalizes well to different scenarios, and automates the scientific discovery process. They believe that part of addressing these challenges is sharing IBM’s work on fairness, explainability, robustness, and accountability. On behalf of AI Trends, Kaitlyn Barago spoke with them about the foundations of trustworthiness, how open science contributes, and the future of AI.

Editor’s note: Chen and Das will present their work at the upcoming AI World Conference & Expo in Boston, October 23-25. With fellow IBM researcher Prasanna Sattigeri, they’ll present in the Making AI Trustworthy Seminar. Chen will also present in a track on Cutting Edge AI Research. Their conversation has been edited for length and clarity.

AI Trends: Thank you both for joining me. Let’s start with this: how do you define trustworthy AI?

Payel Das: No one wants to trust a black-box model. Additionally, AI models should be, in principle, aligned with human-centric values. Therefore, trustworthy AI is AI models that are fair, robust, explainable, and accountable.

Pin-Yu Chen: There are multiple dimensions for trust, and currently we have four pillars, as Payel described: fairness, robustness, explainability, and lineage. I think trust is something very special that evolves over time based on what solutions we are offering to the enterprise.

How does the idea of open source data and libraries help with this goal of creating trustworthy AI?

PD: An AI model is only as good as the data it is trained on. Therefore, open source data that is unbiased and balanced is crucial to building trusted AI. The same goes for open source libraries, which are a key component in ensuring the standardization and reproducibility of AI models. Those are two challenges the community raises today whenever you talk about incorporating AI models into any solution that people and society can trust. So if we have a standardized open source library that can guarantee the dimensions of trust we already mentioned, they are taken into account, for sure. That makes the path easier and more standardized.

PC: I’m very happy to see IBM Research is committed to open-sourcing research assets. It is our belief that by open-sourcing all the research assets we have, everyone in the community can benefit. I think this is very important to making AI systems trustworthy. IBM recently announced that we joined Linux Foundation AI to advance trusted AI. This is another big step, and it shows our commitment to making AI transparent and responsible.

What are some of the challenges that you see in making AI trustworthy?

PC: Making AI trustworthy takes a lot of commitment, and it’s a long journey, and we are just in the beginning of that journey. I think there are several things we need to do in order to overcome the challenges. One big challenge I think we are doing well in is to first make sure researchers and users are aware of these challenges. In the early phases of AI, people only cared about whether AI fulfilled the task, and not so much about trustworthiness. But in recent years we have seen more and more enterprises using AI solutions, and they’ve become aware of the importance of infusing trust into these AI solutions, which again includes fairness, explainability, robustness, and so on.

And the other challenge is about AI technology itself. As Payel mentioned, AI is indeed a black-box technology, where we give the AI model data and it learns by itself how to recognize and make decisions, based on the data and the model we give it. It is a little bit automated in nature. The challenge, in terms of technology, is how and what it learns for decision-making, and how we translate that decision-making process for humans so we [can make the AI] trustworthy.

PD: The definition of trust comes from humans, not from a machine. In order to incorporate the several different dimensions of trust, the builders or the developers or the researchers who are working on these AI models have to learn the dimensions of trust. The researchers or designers have the power to build trustworthy AI models by believing in those dimensions of trust and practicing them on a daily basis. And as you might know, at IBM Research we practice several dimensions of trust. For example, fairness is a key practice in our daily lives at IBM. So we are familiar with these different dimensions of trust. And when working with AI, that makes it “easier” to ensure that the AI models we are making incorporate some of those dimensions of trust, if not all.

What are some of the ways that IBM is addressing these challenges of making AI fair, explainable, robust, and accountable, among other things?

PD: One key practice in recent years has been launching open source toolkits, so that not just IBM or its clients can benefit from trustworthy AI, but the whole community. The notion of trustworthy AI can go beyond IBM Research; it can influence all practitioners in a community. Recently we have launched toolkits such as ART (Adversarial Robustness Toolbox); AI Explainability 360, which incorporates explainability in AI; and AI Fairness 360, which addresses fairness in AI. Each of them addresses an existing challenge in trustworthy AI.

For example, the recently launched AI Explainability 360 is designed to translate algorithmic research from labs into actual practice in domains like finance, human capital management, healthcare, and education. It has eight different state-of-the-art algorithms for interpretable machine learning, as well as different explainability metrics. The AI Fairness 360 toolkit has more than 70 different metrics of fairness, showing how we take care of the broadness and the multifaceted nature of fairness in AI.
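To give a rough sense of how such a toolkit is used in practice, here is a minimal sketch with AI Fairness 360 that computes two of its many fairness metrics on a toy dataset. The data and the choice of protected attribute are hypothetical, and exact class names may vary across releases:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: "sex" is the protected attribute, "income" the label.
df = pd.DataFrame({
    "sex":    [0, 0, 0, 1, 1, 1, 1, 0],
    "income": [1, 0, 0, 1, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Two of the toolkit's 70+ fairness metrics: a difference near 0 and a
# ratio near 1 both suggest parity between the two groups.
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())
```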

PC: ART, the Adversarial Robustness Toolbox, is a very comprehensive toolkit to make sure AI models are robust to malicious attempts or malicious manipulation across the lifecycle of AI. At different phases an AI model is potentially vulnerable to adversarial attacks, like when you train your model, or when you deploy your model as a service. ART is a very nice toolbox that includes a set of attacks to help evaluate your robustness, a set of defenses that help you improve your robustness, and a set of evaluation tools to provide you some quantitative measures of how robust your model is.
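For a concrete sense of that attack-then-evaluate workflow, here is a minimal sketch using ART to attack a scikit-learn classifier with the Fast Gradient Method and compare clean versus adversarial accuracy. Module paths follow recent ART releases and may differ in older versions:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART's attacks and evaluations can operate on it.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

# Clean vs. adversarial accuracy is a simple quantitative robustness measure.
print("clean accuracy:      ", (model.predict(X) == y).mean())
print("adversarial accuracy:", (model.predict(X_adv) == y).mean())
```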

PD: We are also working on a concept of AI factsheets. The idea of a factsheet is to provide an additional layer of information about the AI model: every AI model will have a factsheet of its own that provides information about the product’s important characteristics. That ensures that the developers or scientists who are making these AI models know all aspects of it, and also that the end user will know every defined dimension that comes with this AI model.
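To make the idea concrete, a factsheet is essentially structured metadata that travels with a model. The sketch below is purely illustrative; the field names and values are hypothetical, since the actual FactSheets work defines its own schema:

```python
import json

# Hypothetical factsheet: what a model might document about itself.
factsheet = {
    "model_name": "loan-approval-classifier",
    "intended_use": "Pre-screening of consumer loan applications",
    "training_data": "Internal loan dataset, 2015-2019",
    "fairness": {"metric": "disparate_impact", "value": 0.92},
    "robustness": {"attack": "FastGradientMethod", "adversarial_accuracy": 0.81},
    "explainability": "Per-decision feature attributions available",
}

# Publish the factsheet alongside the model so end users can inspect it.
print(json.dumps(factsheet, indent=2))
```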

Where do you see the greatest potential for AI to change our current discovery process in science?

PD: If you think about the current way a human or a society achieves a scientific discovery, it is run by a trial-and-error method. It is highly costly and time-consuming. AI can be used to automate, accelerate, and enable new scientific discoveries in many areas such as healthcare, climate science, high-energy physics, and materials science. One important area of discovery we are working on at IBM Research is the design of molecules and materials, given IBM’s long history of research in physical science, materials science, and mathematical science.

In the discovery process of molecules and materials, the goal of the AI algorithm is to discover a new molecule or material with a desired property, such as a drug for treating a rare cancer, or maybe a material capable of better energy storage and conversion. At IBM Research, we are addressing these challenges, and we’ll discuss some of them at the AI World event.

PC: I totally agree with Payel. I think a lot of the excitement in the space of AI in scientific discovery is really about accelerating the process of scientific discovery; we somehow reduce the time spent on this trial and error so we can boost the discovery process. That’s very exciting.

What do you think it will take to get there? Where do you see the future of AI going?

PD: The four components that both Pin-Yu and I mentioned before (fairness, explainability, robustness, and accountability) are very crucial for AI to automate the discovery process. Because again, the end product of this discovery process, if it’s AI-driven, has to be trusted by the human, the society. So it is extremely important that this discovery, maybe it’s a drug, or a material, or a diagnosis of a disease, is robust, is explainable, and is fair. The critical challenge for trustworthy AI is the data; the data is crucial in order for the AI to be fair.

An AI model should also be as smart and innovative as a human scientist. Therefore, the abilities to learn from different domains, digest that knowledge, and be creative on top of that are highly crucial for an AI model to be at the level of a Nobel Prize-winning scientist who can make a world-changing discovery.

Whereas all the aspects of trustworthy AI that we mentioned earlier are important for AI to be capable of a scientific discovery, learning from different domains as well as being creative is really important for AI to be like a human scientist. Or at least augment a human scientist and make him or her capable of achieving a discovery in less time and with less effort.

PC: I agree. There is indeed not a single definition for each pillar that we talk about for trustworthy AIs. For example, for fairness we have 70 different fairness metrics. For explainability, we have things like global explainability versus local explainability. For robustness we have different definitions of robustness for different data types, or different AI models. This research for making AIs trustworthy is very dynamic, and it’s evolving in some sense based on the demands.

In addition to making AI trustworthy, we also want to make sure we convey the right message to the general audience so they can set the right expectations of what it means to make AI trustworthy and robust. Everybody’s talking about AI, but not so many end users actually know what AI is doing and how it approaches the decision-making process. That’s why we believe conveying the message to the outside world and sharing our research developments is very important.

For more information, go to AI World Conference & Expo.

Source: AI Trends
