Topic modeling for conversations for mental health helplines with utterance embedding

Salim Salmi*, Rob van der Mei, Saskia Mérelle, Sandjai Bhulai

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Conversations in which topics are only locally contextual often produce incoherent results when standard topic modeling methods are applied. Splitting a conversation into its individual utterances makes it possible to avoid this problem; however, the increased data sparsity requires different methods. We implemented baseline bag-of-words topic modeling methods for regular and short text, as well as topic modeling methods based on transformer sentence embeddings, and evaluated them on topic coherence and word embedding similarity. Each method was trained on single utterances, on segments of the conversation, and on the full conversation. The results show that utterance-level and segment-level data combined with sentence embedding methods perform better than non-sentence-embedding methods or conversation-level data. Among the sentence embedding methods, clustering with HDBSCAN showed the best performance. We suspect that its ability to ignore noisy utterances explains the better topic coherence and the relatively large improvement in topic word similarity.
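The pipeline described in the abstract (split a conversation into utterances, embed each utterance with a transformer sentence encoder, cluster the embeddings with HDBSCAN, and discard noise points) can be illustrated with a minimal sketch. The encoder name, cluster parameters, and toy utterances below are illustrative assumptions, not the authors' actual implementation or data; real helpline data would involve many thousands of utterances.

```python
# Minimal sketch of utterance-level topic modeling with sentence embeddings
# and HDBSCAN clustering. Model choice, parameters, and the toy utterances
# are assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
import hdbscan

# Toy stand-in for conversations split into single utterances.
utterances = [
    "I have been feeling very anxious lately.",
    "Can you tell me more about what triggers it?",
    "Mostly work pressure and not sleeping well.",
    "Sleep problems can make the anxiety worse.",
    "I also argued with my partner yesterday.",
    "Arguments at home keep coming back in our talks.",
]

# 1) Embed each utterance with a transformer-based sentence encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = encoder.encode(utterances, normalize_embeddings=True)

# 2) Cluster the utterance embeddings; points labeled -1 are treated as
#    noise and ignored, which the abstract suggests aids topic coherence.
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(embeddings)

# 3) Describe each cluster by its most frequent words (a simple stand-in
#    for a class-based TF-IDF topic representation).
vectorizer = CountVectorizer(stop_words="english")
for topic in sorted(set(labels) - {-1}):
    docs = [u for u, lab in zip(utterances, labels) if lab == topic]
    counts = vectorizer.fit_transform(docs)
    vocab = np.array(vectorizer.get_feature_names_out())
    order = np.asarray(counts.sum(axis=0)).ravel().argsort()[::-1][:5]
    print(f"Topic {topic}: {', '.join(vocab[order])}")
```

With so few toy utterances HDBSCAN may mark everything as noise and print nothing; the point of the sketch is only the order of operations, not the output.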

Original language: English
Article number: 100126
Pages (from-to): 1-7
Number of pages: 7
Journal: Telematics and Informatics Reports
Volume: 13
Early online date: 27 Feb 2024
DOIs
Publication status: Published - Mar 2024

Bibliographical note

Publisher Copyright:
© 2024 The Authors

Keywords

  • BERT
  • Conversations
  • Mental health
  • Sentence embedding
  • Topic modeling
