Development of a System for Contextualized Detection of Sentiments and Moral Values in Conversation Texts Using Large Language Models and Social Graphs

Understanding language on social media requires more than processing isolated text; it demands attention to the social structures in which that text is embedded. User relationships, influence interactions, and network dynamics significantly affect how sentiment and moral values are communicated. However, most Natural Language Processing (NLP) approaches treat each message in isolation, ignoring this social context. In this work, a unified framework that makes language models 'socially aware' is presented by integrating user-level embeddings derived from social interaction graphs. Specifically, undirected graphs are constructed and node representations are learned via scalable methods, and these social embeddings are fused with the language model's text representation so that it can draw on both linguistic and social signals as it learns. A novel moral annotation pipeline based on Moral Foundations Theory (MFT) is also introduced to create gold-standard labels. Applied to sentiment and moral value detection, our approach yields substantial gains in sentiment tasks and modest improvements in the inherently more challenging moral value prediction. Comparative experiments under low-resource conditions reveal that encoder-only architectures retain their edge in moral reasoning, while decoder-only models can catch up when enriched with social embeddings. Furthermore, this work provides a flexible framework for other NLP tasks, enhancing language comprehension by incorporating additional contextual information beyond text.
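The fusion step described above can be sketched minimally. The abstract does not specify the fusion mechanism, so this sketch assumes simple concatenation of the author's graph-derived social embedding with the message's text embedding before a downstream classifier; the encoder, embedding dimensions, and user table are all hypothetical stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (real language models use 768+ for text).
TEXT_DIM, SOCIAL_DIM = 8, 4

# Hypothetical per-user node embeddings, assumed to have been learned
# from an undirected social interaction graph via a scalable method.
user_social_emb = {
    "alice": rng.normal(size=SOCIAL_DIM),
    "bob": rng.normal(size=SOCIAL_DIM),
}

def encode_text(message: str) -> np.ndarray:
    """Stand-in for a language model encoder (random but deterministic
    per message within a run, purely for illustration)."""
    msg_rng = np.random.default_rng(abs(hash(message)) % (2**32))
    return msg_rng.normal(size=TEXT_DIM)

def fuse(message: str, author: str) -> np.ndarray:
    """Concatenate the text representation with the author's social
    embedding, so a downstream classifier sees both linguistic and
    social signals at once."""
    return np.concatenate([encode_text(message), user_social_emb[author]])

fused = fuse("great news today!", "alice")
assert fused.shape == (TEXT_DIM + SOCIAL_DIM,)
```

In practice the fused vector would feed a classification head trained jointly with (or on top of) the language model, which is what lets the model draw on social context beyond the message text.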