With the extensive application of deep learning (DL) algorithms in recent years, e.g., for detecting Android malware or vulnerable source code, artificial intelligence (AI) and machine learning (ML) are increasingly becoming essential in the development of cybersecurity solutions. However, sharing the same fundamental limitation with other DL application domains, such as computer vision (CV) and natural language processing (NLP), AI-based cybersecurity solutions are incapable of justifying their results (ranging from detection and prediction to reasoning and decision-making) and making them understandable to humans. Consequently, explainable AI (XAI) has emerged as a paramount topic addressing the challenge of making AI models explainable or interpretable to human users. It is particularly relevant in the cybersecurity domain, in that XAI may allow security operators, who are overwhelmed with tens of thousands of security alerts per day (most of which are false positives), to better assess potential threats and reduce alert fatigue. We conduct an extensive literature review on the intersection between XAI and cybersecurity. In particular, we investigate the existing literature from two perspectives: the applications of XAI to cybersecurity (e.g., intrusion detection, malware classification), and the security of XAI (e.g., attacks on XAI pipelines, potential countermeasures). We characterize the security of XAI with several security properties that have been discussed in the literature. We also formulate open questions that are either unanswered or insufficiently addressed in the literature, and discuss future directions of research.

Artificial intelligence (AI) is a paradigm for simulating human reasoning, e.g., classifying previously unobserved data, predicting future events such as stock market trends, and forecasting sales and consumer behavior. The goal of the field has been to develop general intelligence in AI. Interest in AI-related research has been growing exponentially, partly motivated by its outstanding performance in computer vision (CV) and speech recognition. As researchers focus on improving model performance, the ability to explain the reasoning behind a model’s predictions becomes increasingly crucial. For example, “Why did I not get a loan?” and “Why does this X-ray picture say I have cancer?” are compelling questions that the research community must be able to answer. Therefore, researchers have been exploring explainable AI (XAI), a paradigm that targets AI models and aims to provide explanations for their predictions. The need for such explanations has been discussed with respect to several practical, ethical, and operational considerations. DARPA’s XAI program highlighted that a machine learning (ML) model’s explainability tends to be inversely related to its prediction performance (e.g., accuracy). Notably, deep learning (DL) models, which are arguably the most powerful and complex type of AI algorithms, are also the most difficult to explain. The role of XAI is to enhance explainability while maintaining high performance.

XAI has been claimed to be essential for i) professionals (e.g., doctors) who use AI systems and need to understand the decisions made, ii) end users (e.g., patients) who are affected by an AI decision, a need that legal regulations such as the General Data Protection Regulation (GDPR) codify, and iii) developers who improve AI algorithms by accurately identifying their strengths and weaknesses. Prior work has also highlighted the intersection between security and explainable ML, arguing that XAI could be used to select the right data anonymization techniques so that privacy is protected while the ML results remain viable. To comply with the GDPR, researchers resort to anonymizing the data they use. However, several standard anonymization techniques distort the predictions of ML algorithms. Researchers therefore suggest that AI explainability could help in selecting suitable anonymization techniques for ML algorithms, as comprehending the ML decisions would aid in understanding and estimating bias. Thus, XAI could be the key to designing solutions that leverage the power of ML while protecting privacy.

There are two main approaches to explaining deep neural networks (DNNs): i) making parts of a DNN transparent, for which sensitivity analysis and layer-wise relevance propagation (LRP) are well-known methods, with LRP performing better at identifying the most relevant pixels; and ii) learning semantic graphs, called explanatory graphs, from existing DNNs, which aim to extract the knowledge learned by a DNN and model it as an explainable graph, as proposed by Zhang et al. Studies on XAI have also been classified into two categories: a minority of works that focus on creating inherently explainable models, and the majority that wrap black-box models with a layer of explainability, the so-called post hoc models.
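To make the first of these two approaches concrete, the sketch below shows plain gradient-based sensitivity analysis (not LRP itself, which redistributes relevance layer by layer): the magnitude of the gradient of the predicted class score with respect to each input pixel serves as a rough relevance map. It is a minimal illustration under assumed tooling; PyTorch, the toy untrained CNN, and the input shapes are placeholders rather than details from the surveyed works.

```python
# Minimal sketch of gradient-based sensitivity analysis (a saliency map),
# one of the "make parts of a DNN transparent" techniques mentioned above.
# Assumes PyTorch; the tiny CNN and random input are placeholders so the
# script runs end-to-end without external data or a trained model.
import torch
import torch.nn as nn

# Toy classifier standing in for a real, trained DNN (e.g., an image or malware model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 5),  # 5 hypothetical output classes
)
model.eval()

# A single RGB input; we inspect gradients with respect to these pixels.
x = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Back-propagate the score of the predicted class to the input pixels.
score = logits[0, predicted_class]
score.backward()

# Sensitivity map: per-pixel gradient magnitude, maximized over color channels.
saliency = x.grad.detach().abs().max(dim=1).values[0]  # shape: (32, 32)

# Pixels with the largest values are those the prediction is most sensitive to.
top_vals, top_idx = saliency.flatten().topk(5)
print("predicted class:", predicted_class)
print("most sensitive pixel indices (flattened):", top_idx.tolist())
```

In practice the same gradient map would be computed on a trained model and overlaid on the input (e.g., an X-ray image or a byte-level malware representation) so that an analyst can see which regions drove the decision.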