Eating detection with a head-mounted video camera

In this paper, we present a computer-vision-based approach to detecting eating. Specifically, our goal is to develop a wearable system that is effective and robust enough to automatically detect when people eat, and for how long. We collected video from a cap-mounted camera on 10 participants for about 55 hours in free-living conditions. We evaluated the performance of eating detection with four different Convolutional Neural Network (CNN) models. The best model achieved an accuracy of 90.9% and an F1 score of 78.7% for eating detection at a 1-minute resolution. We also discuss the resources needed to deploy a 3D CNN model on wearable or mobile platforms, in terms of computation, memory, and power. We believe this paper is the first to experiment with video-based (rather than image-based) eating detection in free-living scenarios.
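The abstract does not include implementation details, so the sketch below is only a rough illustration of the general idea: a small 3D CNN that classifies short head-camera video clips as eating or not eating, with accuracy and F1 computed over per-minute binary labels. The model architecture, layer sizes, clip shape, and all names here are hypothetical and are not taken from the paper; in practice, clip-level predictions would presumably be aggregated into 1-minute windows before scoring, whereas this toy example simply treats each clip as one minute.

```python
# Hypothetical sketch (not the paper's model): a tiny 3D CNN eating/not-eating
# classifier plus per-minute accuracy and F1, written in PyTorch.
import torch
import torch.nn as nn


class TinyEating3DCNN(nn.Module):
    """Toy 3D CNN; input is a batch of clips shaped (N, 3, T, H, W)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over time and space
        )
        self.classifier = nn.Linear(32, 2)    # 0 = not eating, 1 = eating

    def forward(self, clips):
        x = self.features(clips).flatten(1)
        return self.classifier(x)


def minute_level_metrics(preds, labels):
    """Accuracy and F1 over per-minute binary predictions (1 = eating)."""
    preds, labels = preds.bool(), labels.bool()
    tp = (preds & labels).sum().item()
    fp = (preds & ~labels).sum().item()
    fn = (~preds & labels).sum().item()
    acc = (preds == labels).float().mean().item()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1


if __name__ == "__main__":
    model = TinyEating3DCNN()
    clips = torch.randn(4, 3, 16, 112, 112)   # 4 clips of 16 frames at 112x112
    minute_preds = model(clips).argmax(dim=1)  # one prediction per clip ("minute")
    minute_labels = torch.tensor([1, 0, 1, 0])
    acc, f1 = minute_level_metrics(minute_preds, minute_labels)
    print(f"accuracy={acc:.3f}  F1={f1:.3f}")
```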

To see more from the Auracle research group, check out our publications on Zotero.

Bi, Shengjie and Kotz, David, “Eating detection with a head-mounted video camera” (2021). Computer Science Technical Report TR2021-1002. Dartmouth College. https://digitalcommons.dartmouth.edu/cs_tr/384