Cracking the Code: Enhancing Trust in AI through Explainable Models

Authors

  • Vipin Gupta, Shailendra Shukla, Kumari Nikita

DOI:

https://doi.org/10.48047/resmil.v10i1.20

Keywords:

Explainable AI (XAI), Trust in AI, Black-Box Models, Model Interpretability, Transparency in AI, AI Trustworthiness, XAI Techniques, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), Attention Mechanisms

Abstract

In this paper, we explore the critical challenge of building trust in artificial intelligence (AI) systems, particularly those characterized by black-box models. The proliferation of complex and opaque AI models has raised concerns about a lack of interpretability, hindering users’ understanding of and confidence in these systems. The central problem addressed in this review is the importance of increasing the trustworthiness of AI through explainable AI (XAI) approaches that clarify model complexity.

To address this issue, we conduct a comprehensive review of the existing literature on XAI, black-box models, and their implications for trustworthiness. We analyze various XAI methods, such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and attention mechanisms, clarifying how each aims to make AI models transparent. In addition, we examine real-world case studies in which the use of XAI has enhanced the trustworthiness of AI systems across various sectors.
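To illustrate the core idea behind LIME-style explanations discussed above, the following is a minimal sketch (not the `lime` library itself): perturb the instance of interest, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The function and model names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """Sketch of a LIME-style local surrogate explanation.

    Perturbs the instance x, weights samples by an RBF proximity kernel,
    and fits a weighted linear model whose coefficients act as
    per-feature attributions for the black-box predict_fn near x.
    """
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations around the instance of interest.
    Z = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    y = predict_fn(Z)
    # 2. Weight each perturbation by its proximity to x (RBF kernel).
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / (kernel_width ** 2))
    # 3. Weighted least squares: solve (A^T W A) b = A^T W y, with intercept.
    A = np.hstack([Z, np.ones((num_samples, 1))])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # per-feature attributions (intercept dropped)

# Toy black-box model (hypothetical): feature 0 dominates, feature 2 is irrelevant.
def black_box(X):
    return 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.0 * X[:, 2]

attributions = lime_style_explanation(black_box, np.array([1.0, 2.0, 3.0]))
print(np.round(attributions, 2))  # recovers roughly [3, -1, 0]
```

Because the toy model here is exactly linear, the surrogate recovers its coefficients; for a genuinely nonlinear black box, the attributions describe only the model's local behavior around `x`, which is precisely the transparency benefit the review attributes to LIME.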

The main findings of our study highlight the important role of XAI in reducing the uncertainty associated with black-box models. We highlight examples in which the adoption of interpretable approaches not only increased the interpretability of AI systems but also enhanced user confidence. By providing transparent insights into decision-making processes, XAI is proving effective at demystifying complex models and establishing a foundation of trust between users and AI systems.

The implications of our research apply to a range of industries that rely on AI, including healthcare, finance, and autonomous systems. While outlining the benefits of XAI for building trust, we recommend its inclusion in AI development practices and highlight possible future developments in this area. However, our study also acknowledges the existing challenges and limitations of current XAI techniques, and further research is needed to refine and expand the applicability of interpretability methods.

In conclusion, this study highlights the critical importance of addressing trust issues in AI through the lens of explainability. By opening up complex black-box models, we contribute valuable insights to the ongoing discourse on trustworthy AI and pave the way for the wider adoption and deployment of AI frameworks across industries.

Published

2020-04-30

How to Cite

Vipin Gupta, Shailendra Shukla, & Kumari Nikita. (2020). Cracking the Code: Enhancing Trust in AI through Explainable Models. RES MILITARIS, 10(1), 166–171. https://doi.org/10.48047/resmil.v10i1.20

Issue

Section

Articles