Author Name: Kanwar AdhiRaj Singh Jodha
Date: 28-03-2026
Machine learning methods have enabled news engagement prediction models of unprecedented predictive power, yet the psychological interpretability of these models (what they reveal about the psychological processes that drive engagement) remains limited by a fundamental tension between predictive performance and explanatory transparency. This paper reviews the state of the art in ML-based news engagement prediction, evaluating models ranging from linear regression baselines through gradient boosting machines to large language model-based content classifiers, and argues that the field requires a paradigm shift from prediction-only objectives toward psychologically interpretable models that both predict engagement and explain its psychological mechanisms. It surveys the feature categories that contribute most to engagement prediction: content features (emotional valence, narrative structure, topic salience, linguistic complexity), temporal features (publication time, news cycle position), social features (prior sharing counts, source credibility signals), and reader features (demographic profile, prior reading history, device context). SHAP (SHapley Additive exPlanations) analysis is evaluated as a methodology for post-hoc psychological interpretation of black-box ML models. The paper proposes the Psychologically Interpretable Engagement Model (PIEM) framework, which integrates pre-specified psychological theories as structural constraints in engagement prediction models, enabling simultaneous prediction and theory testing. Applications to individual-level adaptive content delivery and population-level engagement pattern analysis are discussed alongside their ethical implications.
Keywords: machine learning; news engagement prediction; interpretable AI; SHAP analysis; audience psychology; NLP journalism; predictive modeling; engagement features.
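To make the SHAP methodology discussed in the abstract concrete, the sketch below shows post-hoc Shapley-value attribution applied to a gradient-boosting engagement model. This is a minimal illustration, not the paper's implementation: the feature names only mirror the abstract's feature categories, and the data and functional form of the engagement signal are entirely synthetic assumptions.

```python
# Minimal sketch (not the paper's code): post-hoc SHAP interpretation of a
# gradient-boosting news engagement model. Feature names follow the abstract's
# content / temporal / social / reader categories; all data is synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Illustrative features from each category named in the abstract.
X = pd.DataFrame({
    "emotional_valence": rng.normal(0.0, 1.0, n),          # content
    "linguistic_complexity": rng.normal(12.0, 3.0, n),     # content
    "hours_since_publication": rng.exponential(6.0, n),    # temporal
    "prior_share_count": rng.poisson(20, n),                # social
    "reader_prior_visits": rng.poisson(5, n),               # reader
})

# Synthetic engagement outcome with a made-up additive structure.
y = (
    0.6 * X["emotional_valence"]
    + 0.02 * X["prior_share_count"]
    - 0.05 * X["hours_since_publication"]
    + rng.normal(0.0, 0.5, n)
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# TreeExplainer returns per-prediction Shapley values; averaging their
# absolute values gives a global ranking of feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

global_importance = pd.Series(
    np.abs(shap_values).mean(axis=0), index=X.columns
).sort_values(ascending=False)
print(global_importance)
```

Note that the resulting attributions describe the fitted model's behavior, not the underlying psychological process itself, which is precisely the interpretability gap the abstract frames PIEM as addressing.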