
Algorithms for multi-modal human movement and behaviour monitoring

Abstract

This thesis describes investigations into improvements in the field of automated people tracking using multi-modal infrared (IR) and visible image information. The research question posed is: “To what extent can infrared image information be used to improve visible-light-based human tracking systems?” Automated passive tracking of human subjects is an active research area that has been approached in many ways. Typical approaches include segmentation of the foreground, location of humans, model initialisation and subject tracking. Sensor reliability evaluation and fusion methods are also key research areas in multi-modal systems. Shifting illumination and shadows can cause issues when attempting to extract foreground regions from visible images. Images from thermal IR cameras, which use long-wavelength infrared (LWIR) sensors, are highly invariant to illumination. It is shown that thermal IR images often provide superior foreground masks in many scenarios when pixel-level statistical extraction techniques are used. Experiments are performed to determine whether cues are present at the data level that may indicate the quality of a sensor as an input. Modality-specific measures are proposed as possible indicators of sensor quality (determined by foreground extraction capability). A sensor- and application-specific method for scene evaluation is proposed, whereby sensor quality is measured at the pixel level. A neuro-fuzzy inference system is trained on the scene quality measures to assess a series of scenes and make a modality decision.
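To make the "pixel level statistical extraction" idea concrete, the following is a minimal sketch of one common family of such techniques: a running per-pixel Gaussian background model with a deviation-threshold foreground test. The function names, the learning rate `alpha`, and the threshold `k` are illustrative assumptions, not the specific method developed in the thesis.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.02):
    """Update a running per-pixel Gaussian background model.

    alpha is an assumed learning rate controlling how quickly the
    model adapts to gradual scene changes (e.g. shifting illumination).
    """
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Flag a pixel as foreground if it deviates from the background
    mean by more than k standard deviations (k is an assumed value)."""
    return np.abs(frame - mean) > k * np.sqrt(var + 1e-6)

if __name__ == "__main__":
    # Synthetic example: a flat background with a warm "subject" patch,
    # standing in for a thermal IR frame where people appear bright.
    bg = np.full((8, 8), 50.0)
    mean, var = bg.copy(), np.full_like(bg, 4.0)
    for _ in range(10):                      # learn the static background
        mean, var = update_background(mean, var, bg)
    frame = bg.copy()
    frame[2:4, 2:4] = 200.0                  # bright foreground region
    mask = foreground_mask(mean, var, frame)
    print(mask[2:4, 2:4].all(), mask[0, 0])  # patch detected, background not
```

A per-pixel model like this is sensitive to shadows and illumination change in visible imagery, which is precisely the failure mode that motivates weighting the thermal modality more heavily in such scenes.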
