Applying eXplainable AI Techniques to Interpret Machine Learning Predictive Models for the Analysis of Problematic Internet Use among Adolescents
DOI: https://doi.org/10.5755/j02.eie.36316
Keywords: Artificial intelligence, machine learning, medical services, addiction
Abstract
This research focuses on the potential application of artificial intelligence (AI) techniques to the analysis of behavioural addictions, specifically problematic Internet use among adolescents. Using tabular data from a representative sample of Serbian high-school students, the authors investigated the feasibility of employing eXplainable AI (XAI) techniques, placing special emphasis on feature selection and feature importance methods. The results indicate a successful application to tabular data, with global interpretations that effectively describe the predictive models. These findings align with previous research, confirming both their relevance and accuracy. Interpretations of individual predictions reveal the impact of particular features, especially in cases of misclassified instances, underscoring the value of XAI techniques in error analysis and resolution. Although AI's influence on the medical domain is substantial, current XAI techniques, while useful, are not yet advanced enough for the reliable interpretation of predictions. Nevertheless, they play a crucial role in problem identification and in the validation of AI models.
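The abstract refers to two levels of explanation: global feature importance for the model as a whole and local interpretation of individual predictions. The following minimal Python sketch, which is not taken from the paper, illustrates that general workflow on hypothetical tabular data; it assumes a random forest classifier, scikit-learn's permutation importance for the global view, and the SHAP library for a single prediction. The data, model choice, and parameters are illustrative assumptions, not the authors' actual pipeline.

# Sketch of a global + local XAI workflow on tabular data (assumed setup,
# not the authors' method). Requires scikit-learn, numpy, and shap.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data: rows are respondents, columns are survey features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global interpretation: permutation feature importance on held-out data.
global_imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
print("Permutation importances:", global_imp.importances_mean)

# Local interpretation: SHAP contributions for one (possibly misclassified) instance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])
print("Per-feature SHAP contributions for one prediction:", shap_values)

In a setting like the one described in the abstract, the global importances would be compared against prior findings, while the per-instance SHAP contributions would support the error analysis of misclassified cases.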
License
Copyright for papers published in this journal is retained by the author(s), with first publication rights granted to the journal. The authors agree to the Creative Commons Attribution 4.0 (CC BY 4.0) licence, under which papers in the journal are published.
By virtue of their appearance in this open access journal, papers are free to use, with proper attribution, in educational and other non-commercial settings, provided the initial publication in the journal is acknowledged.