Abstract: Artificial Intelligence (AI) is transforming decision-making in critical fields like healthcare, finance, and governance. However, its "black box" nature undermines trust and comprehension. Explainable AI (XAI) addresses this by enhancing transparency and interpretability, yet aligning explainability with human cognitive and emotional needs remains challenging. This paper explores principles and methodologies for designing human-centered XAI, emphasizing user profiling, dynamic explanations, and ethical considerations like fairness and accountability. Key contributions include adaptive explanations tailored to diverse user needs and strategies to mitigate biases, advancing AI systems that are transparent, accessible, and trustworthy.

Keywords: Artificial Intelligence (AI), Explainable AI (XAI), Human-centered design, Dimensions of trust in AI.


DOI: 10.17148/IARJSET.2025.12110