Does this artificial intelligence think like a human?

In machine learning, it is important to understand why a model makes certain decisions, not just whether those decisions are correct. For example, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip in the clinical photo.

Although tools exist to help practitioners make sense of a model’s reasoning, these methods often provide insight into only one decision at a time, and each must be evaluated manually. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have developed a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behavior. Their technique, called shared interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.

Shared interest helps a user easily uncover trends in a model’s decision-making; for example, perhaps the model is often confused by distracting, irrelevant features, such as background objects in photos. Aggregating these insights lets the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world setting.

“In developing shared interest, our goal is to be able to scale up this analysis process so that you can understand on a more global level what your model’s behavior is,” said Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Boggust co-authored the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.

Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded the project and continued collaborating with Strobelt and Hoover, who helped run case studies showing how the technique could be used in practice.

Human-AI alignment

Shared interest leverages popular techniques, known as saliency methods, that show how a machine-learning model made a specific decision. If the model is classifying images, saliency methods highlight the areas of an image that were important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog and the dog’s head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.
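As a concrete illustration, the sketch below computes a simple gradient-based saliency map with PyTorch. It is a generic example under assumed tooling (torchvision’s pretrained ResNet-18 and a placeholder image file, dog.jpg), not the particular saliency method the researchers used:

```python
# Minimal sketch of a gradient-based saliency map (illustrative only; not the
# specific saliency method used in the shared-interest work).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "dog.jpg" is a placeholder path for whatever image you want to explain.
image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Backpropagate the top-scoring class to get per-pixel gradients.
logits = model(image)
logits[0, logits.argmax()].backward()

# One importance value per pixel: the largest absolute gradient across channels.
saliency_map = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)
```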

Shared interest works by comparing saliency methods with ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image; in the previous example, a box around the entire dog in the photo. When evaluating an image classification model, shared interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.
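A minimal sketch of that comparison, assuming the saliency region and the annotation have both been turned into boolean masks of the same size (the function name and the exact overlap formulas here are illustrative stand-ins, not necessarily the paper’s definitions):

```python
import numpy as np

def alignment_scores(saliency_mask: np.ndarray, ground_truth_mask: np.ndarray) -> dict:
    """Compare a binarized saliency region with a human-annotated region.

    Both inputs are boolean arrays of the same shape; True marks a pixel
    that belongs to the region. Returns simple overlap scores.
    """
    intersection = np.logical_and(saliency_mask, ground_truth_mask).sum()
    union = np.logical_or(saliency_mask, ground_truth_mask).sum()
    return {
        "iou": intersection / union if union else 0.0,
        "ground_truth_coverage": intersection / max(ground_truth_mask.sum(), 1),
        "saliency_coverage": intersection / max(saliency_mask.sum(), 1),
    }
```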

The technique uses several metrics to quantify that alignment (or misalignment) and then sorts a particular decision into one of eight categories. The categories span the spectrum from completely human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
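One way to picture that sorting step is a small rule that combines a prediction’s correctness with the overlap scores. Only the two endpoint categories named above are shown, and the function name and thresholds are illustrative rather than the paper’s exact rules:

```python
def categorize(correct: bool, iou: float, ground_truth_coverage: float) -> str:
    """Sort one model decision into a coarse alignment category (illustrative)."""
    if correct and iou >= 0.9:
        return "human aligned"     # right prediction, same evidence as the annotator
    if not correct and ground_truth_coverage == 0.0:
        return "distracted"        # wrong prediction, ignored the annotated region
    return "partially aligned"     # stand-in for the in-between categories

# Example: a wrong prediction whose saliency never touches the annotated box.
print(categorize(correct=False, iou=0.0, ground_truth_coverage=0.0))  # distracted
```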

“At one end of the spectrum, your model made the decision for the exact same reason a human did, and at the other end of the spectrum, your model and the human made the decision for completely different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them,” Boggust explained.

The technique works similarly with text-based data, where key words are highlighted instead of image regions.

Rapid analysis

The researchers used three case studies to show how shared interest can benefit non-experts and machine-learning researchers.

In the first case study, they used shared interest to help determine whether a dermatologist would trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared interest enabled the dermatologist to quickly see examples of the model’s correct and incorrect predictions. In the end, the dermatologist decided not to trust the model because it made too many predictions based on image artifacts rather than actual lesions.

“The value here is that using shared interest, we can see these patterns emerge in our model’s behavior. In about half an hour, the dermatologist was able to make a confident decision about whether to trust the model and whether to deploy it,” Boggust said.

In the second case study, they worked with a machine-learning researcher to show how shared interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.

In the third case study, they used shared interest to dive deeper into a specific image-classification example. By changing the ground-truth area of the image, they were able to perform a what-if analysis to see which image features were most important for particular predictions.
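As a rough sketch of that kind of what-if analysis, the snippet below scores one hypothetical saliency region against two different ground-truth annotations (the whole dog versus only its head) using an intersection-over-union overlap; the coordinates are made up for illustration:

```python
import numpy as np

saliency_region = np.zeros((224, 224), dtype=bool)
saliency_region[30:100, 50:130] = True   # region the model attends to (hypothetical)

whole_dog = np.zeros_like(saliency_region)
whole_dog[20:200, 40:180] = True         # original human annotation

head_only = np.zeros_like(saliency_region)
head_only[30:90, 60:120] = True          # edited annotation for the what-if

for name, truth in [("whole dog", whole_dog), ("head only", head_only)]:
    iou = (saliency_region & truth).sum() / (saliency_region | truth).sum()
    print(f"{name}: IoU = {iou:.2f}")
```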

The researchers were impressed by how well shared interest performed in these case studies, but Boggust cautioned that the technique is only as good as the saliency methods it is built upon. If those methods contain bias or are inaccurate, shared interest will inherit those limitations.

In the future, the researchers want to apply shared interest to other kinds of data, particularly the tabular data used in medical records. They also want to use shared interest to help improve current saliency methods. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that are understandable to humans.

The work was partly funded by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Reprinted with permission from MIT News. Read the original article.
