Building Trust in Predictive Analytics: A Review of ML Explainability and Interpretability

  • Shruthi Sajid EEMCS, University of Twente, Netherlands
  • Jeewanie Jayasinghe Arachchige Department of Computer Science, Vrije University, Netherlands
  • Faiza Allah Bukhsh EEMCS, University of Twente, Netherlands
  • Abhishta Abhishta BMS, University of Twente, Netherlands
  • Faizan Ahmed EEMCS, University of Twente, Netherlands

Abstract

Purpose – This manuscript reviews the previous literature to examine the trust and interpretability of predictive analytical models that use ML/AI techniques.

Method – The study follows the systematic literature review guidelines of Kitchenham et al. (2007).

Results – The results reveal that past research explicitly discussed the usage of predictive analytics. However, ML models are considered black boxes and suffer from a lack of transparency. The study proposes a typical process to ensure that predictions made by AI/ML models can be interpreted and trusted.

Conclusion – The literature review examined predictive analytics and AI/ML techniques in business decision-making, highlighting their usage across industries. The study reveals that a significant gap exists in research on the explainability and interpretability of these ML models within a business context.

Recommendations – More research is needed on the transparency and interpretability of ML models, including the development of sector-specific explainability frameworks that bridge technical insights and business decisions. Further, it is recommended to integrate ethical and regulatory considerations into explainability frameworks and to study collaboration methods between AI/ML experts and business leaders to align ML models with business goals.

Research Implications – The research highlights the significant gap in the literature on the explainability and interpretability of ML and AI models in the business context. Therefore, the research stresses the need for future investigations into improving model transparency and creating industry-specific and ethical frameworks that help organizations derive more meaningful, trusted, and interpretable insights from data-driven models.

Practical Implications – Organizations should focus on improving transparency, trust, and collaboration in their use of predictive analytics. By addressing explainability issues and incorporating ethical, regulatory, and industry-specific considerations, businesses can more effectively use the power of AI and ML to drive data-informed decisions.

Social Implications – This study highlights the importance of ethical and regulatory concerns related to AI and ML, such as data privacy and fairness.

Author Biographies

Shruthi Sajid, EEMCS, University of Twente, Netherlands

Shruthi Sajid is currently working as a Business Consultant at an IT consulting company in the Netherlands. She helps clients across various industries leverage ServiceNow capabilities to identify and address business challenges and improve processes. She holds a Master's degree from the University of Twente with a focus on Business and IT. Her academic background keeps her passionate about converting data into insights using data-driven tools and languages.

Jeewanie Jayasinghe Arachchige, Department of Computer Science, Vrije University, Netherlands

Jeewanie Jayasinghe Arachchige earned her PhD in Information Systems and Management from Tilburg University in 2013. She obtained her master's degree in Information Technology in 2002 from Keele University, United Kingdom. Her main research interests are Conceptual Modeling, Ontologies, Service-Oriented Design, and Process Mining. She is a lecturer at the Department of Computer Science at Vrije University, Netherlands. Previously, she was a researcher and lecturer at the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Netherlands. She has served as a PC member of several international conferences.

Faiza Allah Bukhsh, EEMCS, University of Twente, Netherlands

Dr. Faiza Bukhsh is an Associate Professor in the Data Management and Biometrics group at the University of Twente, Netherlands. Her research covers machine learning, event-driven intelligence, data privacy, and process mining. She leads national women-in-computing initiatives and has worked on various industry projects with partners such as KPN, Vodafone, Philips, and ING. She completed her postdoc in cyber security at the University of Twente and earned her PhD from Tilburg University, focusing on ontological norms and process mining for compliance checking.

Abhishta Abhishta, BMS, University of Twente, Netherlands

Abhishta Abhishta is an assistant professor at the University of Twente, specializing in the economic and financial impacts of cyber-attacks. He develops data-driven techniques to assess these impacts, aiming to guide organizations in their security investments. His research has been supported by NWO grants for projects on cloud security (MASCOT) and the development of a Responsible Internet (CATRIN), as well as a project with the Dutch police on cyber-criminal collaboration. Abhishta serves on the program committees for ACM/IEEE/IFIP conferences and reviews articles for communication journals. He teaches courses on Enterprise Security and Information Systems and is part of the behavioral data science incubator at his university, applying data science to social science problems. Through his work, Abhishta contributes to enhancing digital security and reliability.

Faizan Ahmed, EEMCS, University of Twente, Netherlands

Dr. Faizan Ahmed earned his PhD in Applied Mathematics from the University of Twente and has since dedicated his research to advancing the fields of applied mathematics and machine learning. His current research is centered on applied machine learning and explainable AI, where he strives to make AI models more interpretable, transparent, and trustworthy. By developing techniques that enhance the interpretability of complex models, Dr. Ahmed aims to bridge the gap between AI's technical capabilities and its practical applications across industries.

Published
2025-01-01
How to Cite
SAJID, Shruthi et al. Building Trust in Predictive Analytics: A Review of ML Explainability and Interpretability. International Journal of Computing Sciences Research, [S.l.], v. 9, p. 3364-3391, jan. 2025. ISSN 2546-115X. Available at: <//stepacademic.net/ijcsr/article/view/677>. Date accessed: 30 mar. 2025.
Section
Articles