Author(s):
Abolfazl Abdollahi, Biswajeet Pradhan
Year Published:
2023
Cataloging Information

Topic(s):
Simulation Modeling
Risk

NRFSN number: 25682
FRAMES RCS number: 67813
Record updated:

Wildfire is one of the worst environmental catastrophes endangering the Australian community. To lessen potential fire threats, it is helpful to recognize fire occurrence patterns and identify fire susceptibility in wildfire-prone regions. Machine learning (ML) algorithms are among the most widely used methods for addressing non-linear problems such as wildfire hazards. Analyzing these multivariate environmental disasters has always been difficult because modeling can be influenced by a variety of sources of uncertainty, including the quantity and quality of the training procedures and input variables. Moreover, although ML techniques show promise in this field, they can be unstable for a number of reasons, including the use of irrelevant descriptor features when developing the models. Explainable AI (XAI) can provide insight into these constraints and, consequently, guide adjustments to the modeling approach and training data as necessary. In this research, we describe how a Shapley additive explanations (SHAP) model can be used to interpret the results of a deep learning (DL) model developed for wildfire susceptibility prediction. Contributing factors such as topographical, land cover/vegetation, and meteorological variables are fed into the model, and various SHAP plots are used to identify which parameters influence the prediction model, their relative importance, and the reasoning behind specific decisions. The findings drawn from the SHAP plots show the significant contributions of factors such as humidity, wind speed, rainfall, elevation, slope, and the normalized difference moisture index (NDMI) to the proposed model's output for wildfire susceptibility mapping. We infer that developing an explainable model aids in understanding the model's decisions when mapping wildfire susceptibility, pinpointing the highest-contributing factors in the prediction model, and consequently controlling fire hazards effectively.
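The abstract does not include the authors' implementation details. As a rough, hedged illustration of the general workflow it describes, the sketch below applies the shap library's model-agnostic KernelExplainer to a small placeholder Keras network trained on synthetic stand-ins for the conditioning factors named above (elevation, slope, NDMI, humidity, wind speed, rainfall). The data, network architecture, and variable names are assumptions for illustration only, not the paper's model or dataset.

```python
# Illustrative sketch only: placeholder data and a toy network stand in for the
# paper's DL wildfire-susceptibility model; SHAP usage follows the shap library API.
import numpy as np
import shap
import tensorflow as tf

# Hypothetical conditioning factors (topographic, vegetation, meteorological).
features = ["elevation", "slope", "ndmi", "humidity", "wind_speed", "rainfall"]
rng = np.random.default_rng(0)
X = rng.random((500, len(features)))               # placeholder predictor matrix
y = (X[:, 3] + X[:, 4] > 1.0).astype("float32")    # placeholder fire/no-fire labels

# A simple fully connected classifier standing in for the paper's DL model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(len(features),)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# KernelExplainer is model-agnostic: it only needs a prediction function and a
# background sample against which feature contributions are measured.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(
    lambda a: model.predict(a, verbose=0).ravel(), background
)
shap_values = explainer.shap_values(X[:100])

# Summary plot ranks the factors by mean |SHAP value| (global importance),
# analogous to the importance rankings discussed in the abstract.
shap.summary_plot(shap_values, X[:100], feature_names=features)
```

KernelExplainer is used here only because it works with any predict function; the abstract does not state which SHAP explainer the authors applied to their DL model.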

Citation

Abdollahi, Abolfazl; Pradhan, Biswajeet. 2023. Explainable artificial intelligence (XAI) for interpreting the contributing factors feed into the wildfire susceptibility prediction model. Science of The Total Environment 879:163004.
