Human mobility monitoring in very low resolution visual sensor network
This paper proposes an automated system for monitoring human mobility patterns using a network of very low resolution visual sensors (30×30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of the proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of the tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. We show experimentally that reliable tracking of people is possible using very low resolution imagery, and we compare our tracker against a state-of-the-art tracking method, which it outperforms. Moreover, mobility statistics such as total distance travelled and average speed, derived from the trajectories, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
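The two mobility statistics named in the abstract follow directly from a sampled trajectory. A minimal sketch, assuming trajectories are delivered as per-frame (x, y) ground-plane positions in metres at a known frame rate (the paper's actual units and rate are not given here):

```python
import numpy as np

def mobility_stats(trajectory, fps):
    """Total distance travelled and average speed from a 2D trajectory.

    trajectory: (N, 2) array of ground-plane positions in metres, one per frame.
    fps: frame rate of the visual sensor network (assumed known).
    """
    steps = np.diff(trajectory, axis=0)           # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)  # per-frame distances in metres
    total_distance = step_lengths.sum()
    duration = (len(trajectory) - 1) / fps        # elapsed time in seconds
    average_speed = total_distance / duration if duration > 0 else 0.0
    return total_distance, average_speed

# A straight 4 m walk sampled at 2 Hz:
track = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
dist, speed = mobility_stats(track, fps=2.0)
# 4 m covered in 2 s -> average speed 2 m/s
```

Comparing such statistics against Ultra-Wide Band ground truth then reduces to computing the same quantities on both position streams.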
CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping
With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), massive amounts of private and sensitive information are stored on these devices. To prevent unauthorized access to these devices, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge, and forged biometrics attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging the unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in the fingertip when it is pressed on the built-in camera. To mitigate the impacts of various ambient lighting conditions and human movements under practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration, and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded on Principal Component Analysis (PCA) is developed to enhance the robustness of cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam can achieve effective and reliable user verification with over 99% average true positive rate (TPR) while maintaining the false positive rate (FPR) as low as 4%.
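The abstract's PCA-based feature transformation can be sketched with a standard recipe: centre the feature vectors and project them onto the principal components that retain most of the variance, discarding noisy low-variance directions. The feature dimensions, beat counts and variance threshold below are hypothetical; the paper's exact configuration is not given in the abstract.

```python
import numpy as np

def pca_transform(X, var_ratio=0.95):
    """Project feature vectors onto the top principal components that
    together explain `var_ratio` of the variance (common PCA recipe)."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()         # per-component variance share
    k = int(np.searchsorted(np.cumsum(explained), var_ratio) + 1)
    return Xc @ Vt[:k].T                          # decorrelated, compact features

rng = np.random.default_rng(0)
beats = rng.normal(size=(40, 12))  # hypothetical: 40 beats x 12 cardiac features
robust = pca_transform(beats)      # (40, k) with k <= 12
```

Verification would then match a probe's transformed features against the enrolled user's in this reduced space.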
A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition
The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images with improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also been shown to improve face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance.
This thesis presents the design and performance evaluation of a novel camera system capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show that the fused output from our camera system not only outperforms the single-modality images for face recognition, but that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting, and in the presence of eyeglasses.
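At its simplest, fusing two spatially registered modalities is a pixel-wise weighted combination; an adaptive scheme would choose the weight per scene, e.g. favouring thermal under poor lighting. The fixed-weight sketch below is an illustrative stand-in, not the thesis's actual fusion method:

```python
import numpy as np

def fuse(visible, thermal, w):
    """Pixel-wise weighted fusion of two spatially registered 8-bit images.
    `w` is the visible-band weight in [0, 1]; (1 - w) goes to thermal.
    An adaptive method would set `w` from scene conditions."""
    v = visible.astype(np.float64)
    t = thermal.astype(np.float64)
    return np.clip(w * v + (1.0 - w) * t, 0, 255).astype(np.uint8)

vis = np.full((4, 4), 200, dtype=np.uint8)   # bright visible patch
lwir = np.full((4, 4), 100, dtype=np.uint8)  # cooler thermal patch
fused = fuse(vis, lwir, w=0.25)
# 0.25*200 + 0.75*100 = 125 at every pixel
```

The common optical path described above is what makes such per-pixel combination possible without a registration step.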
The passive operating mode of the linear optical gesture sensor
The study evaluates the influence of natural light conditions on the effectiveness of the linear optical gesture sensor working in the presence of ambient light only (passive mode). The orientation of the device relative to the light source was modified in order to verify the sensitivity of the sensor. A criterion for differentiating between two states, "possible gesture" and "no gesture", was proposed. Additionally, different light conditions and candidate features were investigated that are relevant to the decision to switch between the passive and active modes of the device. The criterion was evaluated through a specificity and sensitivity analysis of the binary ambient light condition classifier. The elaborated classifier predicts ambient light conditions with an accuracy of 85.15%. Once the light conditions are known, the hand pose can be detected. The hand pose classifier, trained on data obtained in the passive mode under favorable light conditions, achieved an accuracy of 98.76%. It was also shown that the passive operating mode of the linear gesture sensor reduces the total energy consumption by 93.34%, resulting in a current draw of 0.132 mA. It was concluded that the linear optical sensor can be used efficiently in various lighting conditions.
Comment: 10 pages, 14 figures
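The specificity and sensitivity analysis used to evaluate the binary criterion follows the standard definitions: sensitivity is the true positive rate over actual positives, specificity the true negative rate over actual negatives. A minimal sketch with hypothetical labels (the study's actual data are not reproduced here):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (TPR) and specificity (TNR) for a binary classifier,
    e.g. "possible gesture" (1) vs "no gesture" (0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical frame labels: 4 gesture frames, 4 no-gesture frames.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, pred)
# one missed gesture and one false alarm: sens = spec = 0.75
```

Sweeping the criterion's threshold and recomputing both rates yields the operating point trade-off the study evaluates.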
Interoperable services based on activity monitoring in ambient assisted living environments
Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequent high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions, and the heterogeneity of the hardware devices composing an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on the Self Organising Feature Map (SOFM), for automatically classifying the resting location of a moving object in an indoor environment and 2) a strategy able to generate context-aware services based on Fuzzy Markup Language (FML) in order to maximize the users' comfort and the hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services modelled through FML for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, demonstrating the robustness of our approach over others in the literature, with minimal power consumption.
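A Self Organising Feature Map over 2D positions can be sketched in a few lines: each grid node learns a prototype location, and a resting location is classified by its best-matching node. The grid size, learning schedule and floor-plan coordinates below are hypothetical toy choices, not the paper's configuration:

```python
import numpy as np

def train_som(points, grid=(3, 3), epochs=100, lr=0.5, sigma=1.0, seed=0):
    """Minimal SOFM over 2D (x, y) positions: nodes compete for each
    sample and the winner's grid neighbourhood is pulled towards it."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows * cols, 2))        # prototype locations
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        frac = epoch / epochs                     # decay schedule
        a, s = lr * (1 - frac), sigma * (1 - frac) + 1e-3
        for p in rng.permutation(points):
            bmu = np.argmin(np.linalg.norm(weights - p, axis=1))
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * s ** 2))  # grid-neighbourhood kernel
            weights += a * h[:, None] * (p - weights)
    return weights

def classify(weights, p):
    """Index of the best-matching node for position p."""
    return int(np.argmin(np.linalg.norm(weights - np.asarray(p), axis=1)))

# Two hypothetical resting locations on a unit floor plan, e.g. sofa and bed.
pts = np.vstack([np.random.default_rng(1).normal(loc, 0.02, (30, 2))
                 for loc in ([0.2, 0.2], [0.8, 0.8])])
w = train_som(pts)
# Distinct resting locations map to distinct SOM nodes.
```

In the AAL setting, the tracked object's dwell positions would feed the map, and each learned node then labels one resting location for the downstream FML services.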