MehrezVM
Posted August 24

Over the past couple of decades, computer scientists have developed a wide range of deep neural networks (DNNs) designed to tackle real-world tasks. While some of these models have proved highly effective, studies have found that they can be unfair: their performance may vary depending on the data they were trained on and even the hardware platforms they are deployed on. For instance, some studies showed that commercially available deep learning–based facial recognition tools were significantly better at recognizing the features of fair-skinned individuals than those of dark-skinned individuals.

These observed variations in AI performance, due in great part to disparities in the available training data, have inspired efforts to improve the fairness of existing models. Researchers at the University of Notre Dame recently set out to investigate how hardware systems can contribute to the fairness of AI. Their paper, published in Nature Electronics, identifies ways in which emerging hardware designs, such as computing-in-memory (CiM) devices, can affect the fairness of DNNs.

"Our paper originated from an urgent need to address fairness in AI, especially in high-stakes areas like health care, where biases can lead to significant harm," Yiyu Shi, co-author of the paper, told Tech Xplore. "While much research has focused on the fairness of algorithms, the role of hardware in influencing fairness has been largely ignored. As AI models are increasingly deployed on resource-constrained devices, such as mobile and edge devices, we realized that the underlying hardware could potentially exacerbate or mitigate biases."

[Figure: Fairness of neural networks. a. Illustrative example of neural network fairness awareness in dermatological disease detection. b. Process of model training with fairness awareness. c. New objective to be considered in system design: fairness awareness adds a new design objective, extending the problem with a new dimension.]
After reviewing past literature on discrepancies in AI performance, Shi and his colleagues realized that the contribution of hardware design to AI fairness had not yet been investigated. The key objective of their recent study was to fill this gap, specifically by examining how new CiM hardware designs affect the fairness of DNNs.

"Our aim was to systematically explore these effects, particularly through the lens of emerging CiM architectures, and to propose solutions that could help ensure fair AI deployments across diverse hardware platforms," Shi explained. "We investigated the relationship between hardware and fairness by conducting a series of experiments using different hardware setups, particularly focusing on CiM architectures."

As part of this study, Shi and his colleagues carried out two main types of experiments. The first explored the impact of hardware-aware neural architecture designs, varying in size and structure, on the fairness of the results attained.

Source
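The kind of unfairness the article describes, a model performing better for one demographic group than another, is commonly quantified as a gap in accuracy across groups. The following is a minimal, hedged sketch of that idea (it is illustrative only and not taken from the Nature Electronics paper; the function names and toy data are invented for this example):

```python
# Illustrative sketch (not from the paper): measuring fairness as the
# accuracy gap between demographic groups, a common fairness metric.

def group_accuracies(y_true, y_pred, groups):
    """Return per-group prediction accuracy as a dict keyed by group."""
    acc = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        acc[g] = correct / len(idx)
    return acc

def accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two groups.

    0.0 means the model performs equally well on every group;
    larger values indicate greater unfairness by this metric.
    """
    acc = group_accuracies(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())

# Toy data: the model is accurate on group "A" but not on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(group_accuracies(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.25}
print(accuracy_gap(y_true, y_pred, groups))      # 0.75
```

In the study's setting, the same kind of comparison would be repeated across hardware configurations (e.g. different CiM designs) to see whether a deployment platform widens or narrows the gap, rather than only across model variants.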