Machine Learning

Cross-modal Recurrent Models for Human Weight Objective Prediction from Multimodal Time-series Data






    We analyse multimodal time-series data corresponding to weight, sleep and steps measurements, derived from a dataset spanning 15,000 users, collected across a range of consumer-grade health devices by Nokia Digital Health – Withings. We focus on predicting whether a user will successfully achieve their weight objective. For this, we design several deep long short-term memory (LSTM) architectures, including a novel cross-modal LSTM (X-LSTM), and demonstrate their superiority over several baseline approaches. The X-LSTM improves the parameter efficiency of feature extraction by processing each modality separately, while also allowing for information flow between modalities by way of recurrent cross-connections. We derive a general hyperparameter optimisation technique for X-LSTMs, allowing us to significantly improve on the LSTM, as well as on a prior state-of-the-art cross-modal approach, using a comparable number of parameters. Finally, we visualise the X-LSTM classification models, revealing interesting potential implications about latent variables in this task.

    Cross-modal Recurrent Models for Human Weight Objective Prediction from Multimodal Time-series Data
    by Petar Veličković, Laurynas Karazija, Nicholas D. Lane, Sourav Bhattacharya, Edgar Liberis, Pietro Liò, Angela Chieh, Otmane Bellahsen, Matthieu Vegreville
    https://arxiv.org/pdf/1709.08073v1.pdf
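The paper itself does not publish reference code in this thread, but the abstract's core idea — one LSTM stream per modality, with each stream also receiving the previous hidden states of the other streams via recurrent cross-connections — can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions (hidden size, feature sizes, and all function names here are illustrative, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    """Minimal NumPy LSTM cell: input dim d_in, hidden dim d_h."""
    def __init__(self, d_in, d_h, rng):
        self.d_h = d_h
        # One stacked weight matrix for the four gates (input, forget, output, candidate).
        self.W = rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1
        self.b = np.zeros(4 * d_h)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o = (sigmoid(z[k * self.d_h:(k + 1) * self.d_h]) for k in range(3))
        g = np.tanh(z[3 * self.d_h:])
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def xlstm_forward(cells, sequences):
    """Run per-modality LSTMs with recurrent cross-connections.

    At time t, stream m's input is its own observation concatenated with
    the previous hidden states of the *other* streams, so information
    flows between modalities while each keeps its own parameters.
    """
    n_modalities = len(cells)
    n_steps = sequences[0].shape[0]
    h = [np.zeros(cell.d_h) for cell in cells]
    c = [np.zeros(cell.d_h) for cell in cells]
    for t in range(n_steps):
        h_prev = [hm.copy() for hm in h]  # states before this time step
        for m in range(n_modalities):
            cross = np.concatenate(
                [h_prev[k] for k in range(n_modalities) if k != m]
            ) if n_modalities > 1 else np.zeros(0)
            x = np.concatenate([sequences[m][t], cross])
            h[m], c[m] = cells[m].step(x, h_prev[m], c[m])
    # Concatenated final hidden states would feed a classifier head
    # predicting weight-objective success.
    return np.concatenate(h)

# Illustrative setup: three modalities (weight, sleep, steps) with assumed
# per-step feature sizes, hidden size 8, and a 5-step window.
rng = np.random.default_rng(0)
d_h, feats, steps = 8, [1, 2, 1], 5
cells = [LSTMCell(f + 2 * d_h, d_h, rng) for f in feats]
seqs = [rng.standard_normal((steps, f)) for f in feats]
joint = xlstm_forward(cells, seqs)  # shape: (3 * d_h,) = (24,)
```

The key parameter-efficiency point from the abstract is visible here: each modality gets its own (smaller) weight matrix rather than one large joint LSTM over all features, while the cross terms in each input preserve inter-modality information flow.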
