The World Health Organization has reported a growing prevalence of obesity over the past decades, a major risk factor for cardiovascular disease. Proper diagnosis requires monitoring the food intake behaviour of patients. We developed an approach to automatically analyze food intake using chewing and swallowing sounds. These sounds are recorded by two microphones of a sensor system, one in the outer ear canal and one above the ear. A central signal processing element is food intake activity detection, which identifies periods of eating and drinking and rejects sounds not related to food intake. A feed-forward artificial neural network forms the core of the detection algorithm. It is trained on a small data set containing chewing and swallowing sounds, speech, and environmental sounds. Input features were computed in the time and frequency domains, separately for the signals of the in-ear microphone and the reference microphone. A data set of food intake sounds and various environmental sounds of everyday life serves as the test set for evaluation. We compared the results with those of a simple detection algorithm based on the signal energy ratio. While the simple algorithm failed on many environmental sounds, the artificial neural network classifier strongly reduced the number of false positive detections. The approach provides a basic tool for analyzing temporal patterns of human food intake behaviour. In food intake monitoring, the algorithm can trigger food classification and bite weight estimation during periods of actual food intake.
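The energy-ratio baseline mentioned in the abstract can be sketched as follows. The idea is that chewing and swallowing sounds are body-conducted and therefore much stronger at the in-ear microphone than at the outer reference microphone, whereas environmental sounds reach both with similar energy. The frame representation and the threshold value are illustrative assumptions, not parameters taken from the paper.

```python
def frame_energy(frame):
    """Short-time energy of one audio frame (list of samples)."""
    return sum(s * s for s in frame)

def energy_ratio_detector(inear_frames, ref_frames, threshold=2.0):
    """Flag frames whose in-ear / reference energy ratio exceeds a threshold.

    Body-conducted food intake sounds dominate the in-ear channel, so a
    large ratio suggests chewing or swallowing; external noise yields a
    ratio near 1. The threshold here is a hypothetical example value.
    """
    detections = []
    for inear, ref in zip(inear_frames, ref_frames):
        # Guard against division by zero for silent reference frames.
        ratio = frame_energy(inear) / max(frame_energy(ref), 1e-12)
        detections.append(ratio > threshold)
    return detections

# Synthetic example: the first frame is loud in-ear only (food-intake-like),
# the second frame has equal energy on both channels (environmental-like).
inear = [[1.0, -1.0, 1.0], [0.1, 0.1, 0.1]]
ref = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
print(energy_ratio_detector(inear, ref))
```

As the abstract notes, such a ratio test alone produces many false positives, which motivates replacing it with the trained feed-forward classifier operating on time- and frequency-domain features.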