How network pruning can skew deep learning models

Computer science researchers have shown that a widely used technique called neural network pruning can adversely affect the performance of deep learning models, detailed what causes these performance problems, and demonstrated a technique for addressing the challenge.

Deep learning is a type of artificial intelligence that can be used to classify things, like images, text, or sound. For example, it can be used to identify individuals based on facial images. However, deep learning models often require a lot of computational resources to run, which poses a problem when a model is deployed for some applications.

To address these challenges, some systems engage in “neural network pruning.” This effectively makes the deep learning model more compact and therefore able to operate while using fewer computational resources.
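As a rough, hands-on illustration (not drawn from the paper itself), the sketch below uses PyTorch’s built-in torch.nn.utils.prune utilities to zero out the smallest-magnitude weights of a toy model. Production systems prune far larger networks and may use different pruning schemes.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A tiny network standing in for a much larger deep learning model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Zero out the 50% of weights with the smallest magnitude in each linear
# layer (L1 unstructured pruning), shrinking the model's effective size.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Confirm how many weights survived pruning.
total = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())
print(f"{nonzero}/{total} parameters remain nonzero")
```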

“However, our research shows that this network pruning can impair the ability of deep learning models to identify some groups,” says Jung-Eun Kim, co-author of a paper on the work and an assistant professor of computer science at North Carolina State University.

“For example, if a security system uses deep learning to scan people’s faces to determine if they have access to a building, the deep learning model should be compact so that it can operate effectively. This may work well most of the time, but network pruning could also affect the deep learning model’s ability to identify certain faces.”

In their new paper, the researchers explain why network pruning can impair model performance when identifying certain groups – what the literature calls “minority groups” – and demonstrate a new technique to address these challenges.

The researchers identified two factors that explain how network pruning can affect the performance of deep learning models.

In technical terms, these two factors are: the disparity in gradient norms across groups; and the disparity in Hessian norms associated with inaccuracies in a group’s data. Concretely, this means that deep learning models can become less accurate at recognizing specific categories of images, sounds or text. In particular, network pruning can amplify accuracy disparities that already existed in the model.
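To make the first of these factors concrete, here is a minimal sketch of measuring the gradient-norm disparity between two groups. The helper name group_gradient_norm and the toy data are illustrative assumptions, not the paper’s experimental setup, and the Hessian-norm disparity would require second-order tooling not shown here.

```python
import torch
import torch.nn as nn

def group_gradient_norm(model, loss_fn, inputs, labels):
    """Norm of the loss gradient computed over one group's samples."""
    model.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    grads = [p.grad.flatten() for p in model.parameters() if p.grad is not None]
    return torch.cat(grads).norm().item()

# Toy classifier and randomly generated stand-ins for two groups of data.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()
x_a, y_a = torch.randn(100, 16), torch.randint(0, 2, (100,))  # majority group
x_b, y_b = torch.randn(20, 16), torch.randint(0, 2, (20,))    # minority group

norm_a = group_gradient_norm(model, loss_fn, x_a, y_a)
norm_b = group_gradient_norm(model, loss_fn, x_b, y_b)
print(f"gradient-norm disparity between groups: {abs(norm_a - norm_b):.4f}")
```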

For example, if a deep learning model is trained to recognize faces using a dataset that includes the faces of 100 white people and 60 Asian people, it might be more accurate at recognizing white faces, but could still achieve adequate performance in recognizing Asian faces. After network pruning, the model is more likely to be unable to recognize some Asian faces.
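One simple way to see whether such a gap exists, and whether pruning widens it, is to evaluate accuracy separately per group. The following sketch is a generic illustration with made-up data, not the authors’ evaluation code.

```python
import torch
import torch.nn as nn

def per_group_accuracy(model, inputs, labels):
    """Fraction of one group's samples the model classifies correctly."""
    with torch.no_grad():
        preds = model(inputs).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Toy stand-ins mirroring the example above: 100 majority-group samples
# and 60 minority-group samples.
model = nn.Linear(16, 2)
groups = {
    "majority": (torch.randn(100, 16), torch.randint(0, 2, (100,))),
    "minority": (torch.randn(60, 16), torch.randint(0, 2, (60,))),
}

# Running this before and after pruning shows whether the accuracy gap
# between groups has widened.
for name, (x, y) in groups.items():
    print(f"{name} accuracy: {per_group_accuracy(model, x, y):.3f}")
```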

“The impairment may not have been noticeable in the original model, but as it is amplified by pruning the network, the impairment may become noticeable,” says Kim.

“To mitigate this issue, we demonstrated an approach that uses mathematical techniques to equalize the groups that the deep learning model uses to categorize data samples,” Kim explains. “In other words, we use algorithms to close the accuracy gap between groups.”
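The article describes the mitigation only at this high level, so the sketch below is not the authors’ algorithm; it just illustrates the general idea of equalizing groups by penalizing the spread of per-group losses during training. The function name, the penalty form, and the alpha weight are all assumptions.

```python
import torch
import torch.nn as nn

def gap_penalized_loss(model, loss_fn, batches_by_group, alpha=1.0):
    """Average per-group loss plus a penalty on the gap between groups.

    Illustrative only: one generic way to push per-group losses (and hence
    per-group accuracies) closer together, not the paper's method.
    """
    group_losses = torch.stack(
        [loss_fn(model(x), y) for x, y in batches_by_group]
    )
    return group_losses.mean() + alpha * (group_losses.max() - group_losses.min())

# Hypothetical usage inside a training step, with made-up group batches.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()
batches = [
    (torch.randn(100, 16), torch.randint(0, 2, (100,))),  # majority group
    (torch.randn(60, 16), torch.randint(0, 2, (60,))),    # minority group
]
gap_penalized_loss(model, loss_fn, batches, alpha=0.5).backward()
```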

In testing, the researchers demonstrated that using their mitigation technique improved the fairness of a deep learning model that had undergone network pruning, essentially returning it to pre-pruning accuracy levels.

“I think the most important aspect of this work is that we now have a deeper understanding of exactly how network pruning can influence the performance of deep learning models at identifying minority groups, both theoretically and empirically,” Kim says. “We are also open to working with partners to identify unknown or overlooked impacts of model-reduction techniques, especially in real-world applications of deep learning models.”

The paper, “Pruning has a disparate impact on model accuracy,” will be presented at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), taking place Nov. 28 to Dec. 9 in New Orleans. The first author of the paper is Cuong Tran of Syracuse University. The paper was co-authored by Ferdinando Fioretto of Syracuse and Rakshit Naidu of Carnegie Mellon University.

The work was done with support from the National Science Foundation, under grants SaTC-1945541, SaTC-2133169, and CAREER-2143706; as well as a Google Research Scholar Award and an Amazon Research Award.

