Explainable Artificial Intelligence (XAI) has primarily focused on explaining model predictions, yet a critical gap remains in explaining semantic structure discovery within knowledge graphs derived from concept maps (CMs). This study extends the OBOE (explanatiOns Based On concEpts) framework to address a fundamentally different problem: explainable domain discovery in knowledge graph (KG) classification, moving beyond supervised classification to unsupervised structural explanation. Our approach integrates Knowledge Graph Embeddings (KGEs), clustering algorithms, and Large Language Models (LLMs), with the LLMs serving in a novel triple role: generating structural explanations, checking for hallucinations, and enabling large-scale evaluation. Concept-relation-concept triples are embedded through KGEs and clustered using hierarchical and spectral methods to reveal semantic domains, with explanations generated through QualIT-inspired LLM prompting via Chain-of-Thought reasoning. Evaluation across three corpora (Amazon, BBC News, and Reuters) demonstrated robust classification, with mean per-class errors of 0.100, 0.147, and 0.142 and LogLoss values of 0.236, 0.342, and 0.395, while discovering 92 semantic domains across 17 topics. Hierarchical clustering achieved superior performance (mean rating 3.78/5) with higher relevance, while spectral clustering offered better coverage (3.51/5) through more compact structures. By bridging traditional clustering with LLM-based explanation and evaluation, this work establishes a new XAI paradigm for knowledge organization contexts in which understanding semantic graph structure is as critical as classification accuracy.
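To make the embed-then-cluster step concrete, the sketch below shows how concept-relation-concept triples could be vectorized and partitioned with both hierarchical and spectral clustering. It is a minimal illustration only: the toy triples, the random stand-in embeddings (a trained KGE model such as TransE would be used in practice), the embedding dimension, and the cluster counts are all assumptions, not the study's actual configuration.

```python
# Minimal sketch of the triple-embedding-and-clustering pipeline.
# Assumptions (not from the study): toy triples, random vectors in place
# of trained KGE embeddings, dim=16, and 2 clusters per method.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Hypothetical concept-relation-concept triples from a concept map.
triples = [
    ("inflation", "affects", "interest_rates"),
    ("interest_rates", "set_by", "central_bank"),
    ("goalkeeper", "plays_in", "football_match"),
    ("football_match", "part_of", "league"),
]

# Stand-in KGE: random entity/relation vectors. A real system would
# train embeddings so that related triples land close together.
dim = 16
entities = {e for h, _, t in triples for e in (h, t)}
relations = {r for _, r, _ in triples}
ent_vec = {e: rng.normal(size=dim) for e in entities}
rel_vec = {r: rng.normal(size=dim) for r in relations}

# Represent each triple as the concatenation of its embedded parts.
X = np.stack([np.concatenate([ent_vec[h], rel_vec[r], ent_vec[t]])
              for h, r, t in triples])

# Hierarchical clustering of triples into candidate semantic domains.
Z = linkage(X, method="ward")
hier_labels = fcluster(Z, t=2, criterion="maxclust")

# Spectral clustering (default RBF affinity) as the alternative,
# more compact partitioning reported in the evaluation.
spec_labels = SpectralClustering(n_clusters=2,
                                 random_state=0).fit_predict(X)

print("hierarchical domains:", hier_labels)
print("spectral domains:   ", spec_labels)
```

The resulting cluster labels group triples into candidate semantic domains; in the full pipeline, each such cluster would then be passed to the LLM prompting stage for explanation, hallucination checking, and rating.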


