End-to-End Personalization of Digital Health Interventions using Raw Sensor Data with Deep Reinforcement Learning: A comparative study in digital health interventions for behavior change

Ali El Hassouni, Mark Hoogendoorn, Agoston E. Eiben, Martijn van Otterlo, Vesa Muhonen

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We introduce an end-to-end reinforcement learning (RL) solution to the problem of sending personalized digital health interventions. Previous work has shown that personalized interventions can be obtained through RL using simple, discrete state information such as the activity recently performed. In reality, however, such features are often not observed directly; instead, they must be inferred from noisy, low-level sensor data obtained from mobile devices (e.g. accelerometers in mobile phones). One could first transform such raw data into discrete activities, but that would throw away potentially important detail and would require training a classifier for these activities, which in turn requires a labeled training set.
Instead, we propose to learn intervention strategies directly from the low-level sensor data, end-to-end, using deep neural networks and RL. We test our novel approach in a self-developed simulation environment that models and generates realistic sensor data for daily human activities, and we show the short- and long-term efficacy of sending personalized physical workout interventions using RL policies. We compare several different input representations and show that learning from raw sensor data is nearly as effective and much more flexible.
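As a hedged illustration of the end-to-end setup described in the abstract (raw sensor windows encoded by an LSTM and fed to an Advantage Actor-Critic head, per the keywords below), the following PyTorch sketch shows one plausible shape of such a policy network. This is not the authors' implementation: the class name SensorPolicy, the layer sizes, the 100-sample tri-axial window, and the binary send/don't-send action space are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's code): an LSTM encoder
# over raw accelerometer windows with an actor-critic head that decides whether
# to send a workout intervention.
import torch
import torch.nn as nn

class SensorPolicy(nn.Module):
    def __init__(self, n_channels=3, hidden_size=64, n_actions=2):
        super().__init__()
        # LSTM consumes a window of raw sensor samples (e.g. tri-axial accelerometer)
        self.encoder = nn.LSTM(input_size=n_channels, hidden_size=hidden_size, batch_first=True)
        self.actor = nn.Linear(hidden_size, n_actions)   # logits: send / don't send intervention
        self.critic = nn.Linear(hidden_size, 1)          # state-value estimate used as the advantage baseline

    def forward(self, x):
        # x: (batch, time_steps, n_channels) raw sensor window
        _, (h, _) = self.encoder(x)
        h = h[-1]                                         # final hidden state as the learned representation
        return self.actor(h), self.critic(h)

# Usage example: score one window of 100 raw samples with 3 channels
policy = SensorPolicy()
window = torch.randn(1, 100, 3)
logits, value = policy(window)
action = torch.distributions.Categorical(logits=logits).sample()
print(action.item(), value.item())

In an actual Advantage Actor-Critic training loop, the sampled action would be sent to the (simulated) user, the observed reward would be used to compute the advantage against the critic's value estimate, and both heads would be updated jointly; the sketch above only shows the end-to-end mapping from raw sensor input to intervention decision.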
Original language: English
Title of host publication: WI '19 - IEEE/WIC/ACM International Conference on Web Intelligence - Proceedings
Place of publication: New York, NY
Publisher: ACM
Pages: 258-264
Number of pages: 7
ISBN (electronic): 9781450369343
DOI: 10.1145/3350546.3352527
Publication status: Published - 14 Oct 2019

Fingerprint

Reinforcement learning
Health
Sensors
Mobile phones
Accelerometers
Mobile devices
Classifiers

Keywords

  • Personalization
  • Health Interventions
  • eHealth
  • mHealth
  • Sensor data
  • Advantage Actor-Critic
  • Reinforcement Learning
  • LSTM
  • GANs

Cite this

El Hassouni, Ali ; Hoogendoorn, Mark ; Eiben, Agoston E. ; van Otterlo, Martijn ; Muhonen, Vesa. / End-to-End Personalization of Digital Health Interventions using Raw Sensor Data with Deep Reinforcement Learning: A comparative study in digital health interventions for behavior change. WI '19 - IEEE/WIC/ACM International Conference on Web Intelligence - Proceedings. New York, NY: ACM, 2019. pp. 258-264.
@inproceedings{5c0d7cf7e11e41ce8a0c38728ececd1d,
title = "End-to-End Personalization of Digital Health Interventions using Raw Sensor Data with Deep Reinforcement Learning: A comparative study in digital health interventions for behavior change",
abstract = "We introduce an end-to-end reinforcement learning (RL) solution to the problem of sending personalized digital health interventions. Previous work has shown that personalized interventions can be obtained through RL using simple, discrete state information such as the activity recently performed. In reality, however, such features are often not observed directly; instead, they must be inferred from noisy, low-level sensor data obtained from mobile devices (e.g. accelerometers in mobile phones). One could first transform such raw data into discrete activities, but that would throw away potentially important detail and would require training a classifier for these activities, which in turn requires a labeled training set. Instead, we propose to learn intervention strategies directly from the low-level sensor data, end-to-end, using deep neural networks and RL. We test our novel approach in a self-developed simulation environment that models and generates realistic sensor data for daily human activities, and we show the short- and long-term efficacy of sending personalized physical workout interventions using RL policies. We compare several different input representations and show that learning from raw sensor data is nearly as effective and much more flexible.",
keywords = "Personalization, Health Interventions, eHealth, mHealth, Sensor data, Advantage Actor-Critic, Reinforcement Learning, LSTM, GANs",
author = "{El Hassouni}, Ali and Mark Hoogendoorn and Eiben, {Agoston E.} and {van Otterlo}, Martijn and Vesa Muhonen",
year = "2019",
month = "10",
day = "14",
doi = "10.1145/3350546.3352527",
language = "English",
pages = "258--264",
booktitle = "WI '19 - IEEE/WIC/ACM International Conference on Web Intelligence - Proceedings",
publisher = "ACM",

}
