Our ASE 2023 Paper Won ACM SIGSOFT Distinguished Paper Award!
Dr. Ali Ghanbari and I have received the Distinguished Paper Award at the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023) for our paper titled “Mutation-based Fault Localization of Deep Neural Networks.” The ASE conference is a top-tier venue for research on techniques that automate software engineering practices, such as software bug detection and localization.
This work revisits mutation-based fault localization in the context of deep neural networks (DNNs) and proposes a novel technique, named deepmufl, that is applicable to a wide range of DNN models. The research is an example of SE4AI, i.e., Software Engineering for Artificial Intelligence.
Mutation-based fault localization is built on mutation analysis, which systematically introduces small, controlled variations, or mutations, into a program’s code and observes how these alterations impact the program’s behavior. By employing this methodology within the domain of deep neural networks, deepmufl offers a versatile way to localize bugs in trained DNN models, and it advances the state of the art by detecting more bugs than prior approaches on a sizable dataset of DNN bugs.
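To make the idea concrete, here is a minimal, illustrative sketch of mutation-based fault localization applied to a tiny feed-forward network, using only NumPy. The mutation operator (zeroing out a hidden neuron) and the Metallaxis-style suspiciousness score are simplifications chosen for illustration; they are not the actual operators or formulas used by deepmufl.

import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer network: input -> hidden (ReLU) -> binary output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def predict(x, w1=W1, b1_=b1, w2=W2, b2_=b2):
    h = np.maximum(x @ w1 + b1_, 0.0)               # hidden layer with ReLU
    return (h @ w2 + b2_ > 0).astype(int).ravel()   # binary prediction

# A small "test suite": inputs with expected labels. Some tests fail on the
# original model, and fault localization tries to explain those failures.
X = rng.normal(size=(50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
orig_pass = predict(X) == y

# Mutation operators: each mutant perturbs one model element (here, one
# hidden neuron), mimicking the idea of systematically mutating DNN parts.
def make_mutants():
    mutants = []
    for j in range(W1.shape[1]):
        w1_m = W1.copy()
        w1_m[:, j] = 0.0                            # "delete" hidden neuron j
        mutants.append((f"hidden_neuron_{j}", w1_m))
    return mutants

# An element is more suspicious if mutating it flips originally failing
# tests to passing, i.e., the mutation "impacts" the observed failures.
scores = {}
for name, w1_m in make_mutants():
    mut_pass = predict(X, w1=w1_m) == y
    fail_to_pass = np.sum(~orig_pass & mut_pass)
    pass_to_fail = np.sum(orig_pass & ~mut_pass)
    scores[name] = fail_to_pass / (fail_to_pass + pass_to_fail + 1e-9)

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: suspiciousness={s:.2f}")

Running the sketch ranks the hidden neurons by suspiciousness; in a real setting, an engineer would inspect the top-ranked model elements first when debugging the network.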
The Critical Importance of Addressing Bugs in Deep Neural Networks
Like any software, DNNs are not immune to bugs, and these bugs can have far-reaching consequences given the critical roles that DNNs play in various applications. These networks can encounter a variety of problems that significantly impede their performance and functionality.
In the context of DNNs, these bugs can manifest in various ways. For example, in the case of image recognition, a DNN might misclassify objects in images, leading to errors in tasks like identifying pedestrians, vehicles, or obstacles. In natural language understanding, bugs could cause the DNN to misinterpret language, potentially resulting in miscommunications or incorrect responses. Moreover, in decision-making scenarios, DNNs might make erroneous choices based on input data, which can be detrimental in critical applications such as autonomous vehicles or medical devices.
Given the expanding role of DNNs in domains like autonomous transportation and healthcare, addressing these bugs becomes vital. Imagine the potential consequences of a self-driving car misinterpreting a traffic signal or failing to detect an obstacle on the road. Such errors could lead to accidents and put lives at risk. Similarly, in the medical field, if a diagnosis tool powered by a DNN provides incorrect information or misses critical details, it could impact patient outcomes and safety.
Finding and rectifying these bugs is paramount in scenarios where human lives and safety are at stake. Ensuring the reliability and trustworthiness of DNN-based systems is not just a matter of convenience but one of life and death. Consumers and society must be able to rely on these systems to perform flawlessly, especially when they are applied in situations where human error or system failure could result in catastrophic consequences. Therefore, pursuing robust and bug-free DNNs is not merely an option but an ethical and practical imperative in our increasingly interconnected and AI-driven world.