
    Fast, collaborative acquisition of multi-view face images using a camera network and its impact on real-time human identification

    Biometric systems have typically been designed to operate under controlled environments, based on previously acquired photographs and videos. However, recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans in real time under unconstrained environments. Distributed camera networks are appropriate for unconstrained scenarios because they can provide multiple views of a scene, thus offering tolerance against variable pose of a human subject and possible occlusions. In dynamic environments, face images continually arrive at the base station with different quality, pose and resolution, and designing a fusion strategy poses significant challenges. Such a scenario demands that only the relevant information is processed and that the verdict (match / no match) regarding a particular subject is released quickly yet accurately, so that more subjects in the scene can be evaluated. To address these challenges, we designed a wireless data acquisition system that is capable of acquiring multi-view faces accurately and at a rapid rate. Epipolar geometry is exploited to achieve high multi-view face detection rates. Face images are labeled with their corresponding poses and are transmitted to the base station. To evaluate the impact of face images acquired using our real-time face image acquisition system on the overall recognition accuracy, we interface it with a face matching subsystem and thus create a prototype real-time multi-view face recognition system. For frontal face matching, we use the commercial PittPatt software. For non-frontal matching, we use a Local Binary Pattern (LBP)-based classifier. Matching scores obtained from frontal and non-frontal face images are fused for final classification. Our results show significant improvement in recognition accuracy, especially when the frontal face images are of low resolution.
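    The final classification step described above combines scores from two heterogeneous matchers. The following is a minimal sketch of such score-level fusion, assuming min-max normalization, a weighted sum and a fixed decision threshold; the weights, threshold and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of score-level fusion between a frontal matcher
# (e.g. PittPatt scores) and a non-frontal LBP-based matcher.
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so heterogeneous matchers are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def fuse_scores(frontal_scores, profile_scores, w_frontal=0.6):
    """Weighted-sum fusion of normalized frontal and profile match scores (assumed weight)."""
    f = min_max_normalize(frontal_scores)
    p = min_max_normalize(profile_scores)
    return w_frontal * f + (1.0 - w_frontal) * p

def decide(fused_scores, threshold=0.5):
    """Match / no-match verdict per gallery identity (assumed threshold)."""
    return fused_scores >= threshold
```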

    Real-time acquisition of multi-view face images to support robust face recognition using a wireless camera network

    Recent terror attacks, intrusion attempts and criminal activities have necessitated a transition to modern biometric systems that are capable of identifying suspects in real time. However, real-time biometrics is challenging given the computationally intensive nature of video processing and the potential occlusions and variations in pose of a subject in an unconstrained environment. The objective of this dissertation is to utilize the robustness and parallel computational abilities of a distributed camera network for fast and robust face recognition. In order to support face recognition using a camera network, a collaborative middleware service is designed that enables the rapid extraction of multi-view face images of multiple subjects moving through a region. This service exploits the epipolar geometry between cameras to speed up multi-view face detection. By quickly detecting face images within the network, labeling the pose of each face image, filtering them based on their suitability for recognition and transmitting only the resulting images to a base station for recognition, both the required network bandwidth and the centralized processing overhead are reduced. The performance of the face image acquisition system is evaluated using an embedded camera network deployed in indoor environments that mimic walkways in public places. The relevance of the acquired images for recognition is evaluated by using commercial software to match the acquired probe images. The experimental results demonstrate a significant improvement in face recognition performance over traditional systems, as well as an increase in the multi-view face detection rate over purely image-processing-based approaches.
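    One concrete reading of the epipolar speed-up is that a face detected in one camera constrains where the same face can appear in another camera's image. The sketch below is a hedged illustration rather than the dissertation's code: it uses OpenCV's fundamental-matrix epipolar lines to build a search band in the second view, and the band width and function names are assumptions.

```python
# Restrict face detection in camera B to a band around the epipolar line
# induced by a face detected in camera A (assumes a known fundamental matrix F).
import cv2
import numpy as np

def epipolar_band_mask(face_center_a, F, img_shape_b, band_px=40):
    """Binary mask around the epipolar line in camera B for a point (x, y) in camera A."""
    pt = np.array([[face_center_a]], dtype=np.float32)          # shape (1, 1, 2)
    line = cv2.computeCorrespondEpilines(pt, 1, F).reshape(3)   # a*x + b*y + c = 0
    h, w = img_shape_b[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.abs(line[0] * xs + line[1] * ys + line[2]) / np.hypot(line[0], line[1])
    return (dist < band_px).astype(np.uint8)

# Usage idea: run the face detector only where the mask is 1 in camera B,
# which shrinks the search region and speeds up multi-view detection.
```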

    An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display

    We present a tele-immersive system that enables people to interact with each other in a virtual world using body gestures in addition to verbal communication. Beyond the obvious applications, including general online conversations and gaming, we hypothesize that our proposed system would be particularly beneficial to education by offering rich visual content and interactivity. One distinct feature is the integration of egocentric pose recognition that allows participants to use their gestures to demonstrate and manipulate virtual objects simultaneously. This functionality enables the instructor to effectively and efficiently explain and illustrate complex concepts or sophisticated problems in an intuitive manner. The highly interactive and flexible environment can capture and sustain more student attention than the traditional classroom setting and thus delivers a compelling experience to the students. Our main focus here is to investigate possible solutions for the system design and implementation and to devise strategies for fast, efficient computation suitable for visual data processing and network transmission. We describe the technique and experiments in detail and provide quantitative performance results, demonstrating that our system can be run comfortably and reliably in different application scenarios. Our preliminary results are promising and demonstrate the potential for more compelling directions in cyberlearning. Comment: IEEE International Symposium on Multimedia 201
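    The paper emphasizes fast visual data processing and network transmission, but the abstract does not spell out a specific pipeline. As one plausible illustration only, the sketch below downsamples and compresses an RGB-D frame (JPEG for color, 16-bit PNG for depth) before it is sent over the network; the scale factor, codecs and function names are assumptions, not the authors' design.

```python
# Assumed bandwidth-reduction step for streaming RGB-D frames.
import cv2
import numpy as np

def encode_rgbd(color_bgr, depth_mm, scale=0.5):
    """Return compressed byte buffers for a color frame (JPEG) and a 16-bit
    depth frame (PNG), both downsampled to cut network bandwidth."""
    small_color = cv2.resize(color_bgr, None, fx=scale, fy=scale)
    small_depth = cv2.resize(depth_mm, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_NEAREST)
    ok_c, color_buf = cv2.imencode(".jpg", small_color,
                                   [cv2.IMWRITE_JPEG_QUALITY, 80])
    ok_d, depth_buf = cv2.imencode(".png", small_depth.astype(np.uint16))
    assert ok_c and ok_d
    return color_buf.tobytes(), depth_buf.tobytes()
```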

    Emerging technologies for learning report (volume 3)


    Real-time transmission of panoramic images for a telepresence wheelchair

    © 2015 IEEE. This paper proposes an approach to transmit panoramic images in real time for a telepresence wheelchair. The system can provide remote monitoring and assistance for people with disabilities. This study exploits technological advances in image processing, wireless communication networks, and healthcare systems. High-resolution panoramic images are extracted from the camera mounted on the wheelchair and are streamed in real time via a wireless network. The experimental results show that the streaming speed is up to 250 KBps. Subjective quality assessments show that the received images are smooth during the streaming period. In addition, in terms of objective image quality evaluation, the average peak signal-to-noise ratio (PSNR) of the reconstructed images is measured to be 39.19 dB, which indicates high image quality.
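    The objective quality figure quoted above (an average PSNR of 39.19 dB) follows from the standard PSNR definition. The sketch below computes it for a pair of 8-bit images; it reflects only the textbook formula, not the authors' evaluation code.

```python
# Standard PSNR between an original image and its reconstruction.
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```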

    Effectiveness of Multi-View Face Images and Anthropometric Data In Real-Time Networked Biometrics

    Over the years, biometric systems have evolved into a reliable mechanism for establishing the identity of individuals in the context of applications such as access control, personnel screening and criminal identification. However, recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans under unconstrained environments, in real time. Specifically, the following are three critical transitions that are needed and which form the focus of this thesis: (1) In contrast to operation in an offline mode using previously acquired photographs and videos obtained under controlled environments, it is required that identification be performed in a real-time dynamic mode using images that are continuously streaming in, each from a potentially different view (front, profile, partial profile) and with different quality (pose and resolution). (2) While different multi-modal fusion techniques have been developed to improve system accuracy, these techniques have mainly focused on combining the face biometric with modalities such as iris and fingerprints that are more reliable but require user cooperation for acquisition. In contrast, the challenge in a real-time networked biometric system is that of combining opportunistically captured multi-view facial images with soft biometric traits such as height, gait, attire and color that do not require user cooperation. (3) Typical operation is expected to be in an open-set mode where the number of subjects enrolled in the system is much smaller than the number of probe subjects, yet the system is required to deliver high accuracy. To address these challenges and to make a successful transition to real-time human identification systems, this thesis makes the following contributions: (1) A score-based multi-modal, multi-sample fusion technique is designed to combine face images acquired by a multi-camera network, and the effectiveness of opportunistically acquired multi-view face images in improving identification performance is characterized. (2) The multi-view face acquisition system is complemented by a network of Microsoft Kinects for extracting human anthropometric features (specifically height, shoulder width and arm length); the score-fusion technique is augmented to utilize this anthropometric data, and its effectiveness is characterized. (3) The performance of the system is demonstrated using a database of 51 subjects collected with the networked biometric data acquisition system. Our results show improved recognition accuracy when face information from multiple views is utilized for recognition, and also indicate that a given level of accuracy can be attained with fewer probe images (less time) when compared with a uni-modal biometric system.
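    To make the augmented score fusion concrete, the sketch below turns Kinect-derived anthropometric measurements (height, shoulder width, arm length) into a soft-biometric similarity score and combines it with the best multi-view face score via a weighted sum. The feature scaling, weights and function names are assumptions for illustration, not the parameters used in the thesis.

```python
# Hypothetical fusion of multi-view face scores with anthropometric soft biometrics.
import numpy as np

def anthropometric_score(probe_vec, gallery_vec, scales=(0.10, 0.05, 0.05)):
    """Similarity in [0, 1] from normalized absolute differences between measurements
    in metres: (height, shoulder_width, arm_length). Scales are assumed tolerances."""
    d = np.abs(np.asarray(probe_vec) - np.asarray(gallery_vec)) / np.asarray(scales)
    return float(np.exp(-np.mean(d)))

def fused_identity_score(face_scores, anthro_score, w_face=0.8):
    """Combine the best multi-view face score with the soft-biometric score."""
    return w_face * max(face_scores) + (1.0 - w_face) * anthro_score
```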

    A study of smart device-based mobile imaging and implementation for engineering applications

    Title from PDF of title page, viewed on June 12, 2013. Thesis advisor: ZhiQiang Chen. Vita. Includes bibliographic references (pages 76-82). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2013.
    Mobile imaging has become a very active research topic in recent years thanks to the rapid development of the computing and sensing capabilities of mobile devices. This area features multi-disciplinary studies of mobile hardware, imaging sensors, imaging and vision algorithms, wireless networks and human-machine interface problems. Due to the limited computing capacity of early mobile devices, researchers proposed a client-server model, which pushes the data to more powerful computing platforms through a wireless network and lets the cloud or standalone servers carry out all the computing and processing work. This thesis reviews the development of mobile hardware and software platforms and the related research on mobile imaging over the past 20 years. There are several studies on mobile imaging, but few aim at building a framework that helps engineers solve problems using mobile imaging. With higher-resolution imaging and high-performance computing power built into smart mobile devices, more and more image processing tasks can be carried out on the device rather than through the client-server model. Based on this fact, a framework for collaborative mobile imaging is introduced for civil infrastructure condition assessment to help engineers solve technical challenges. Another contribution of this thesis is applying mobile imaging to home automation. E-SAVE is a research project focusing on the extensive use of automation to conserve and use energy wisely in home automation. Mobile users can view critical information, such as energy data of the appliances, with the help of mobile imaging. OpenCV is an image processing and computer vision library; the applications in this thesis use OpenCV functions including camera calibration, template matching, image stitching and Canny edge detection. The application aimed at helping field engineers is interactive crack detection; the other uses template matching to recognize appliances in the home automation system.
    Contents: Introduction -- Background and related work -- Basic imaging processing methods for mobile applications -- Collaborative and interactive mobile imaging -- Mobile imaging for smart energy -- Conclusion and recommendation
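    Two of the OpenCV operations named above, Canny edge detection (used for interactive crack detection) and template matching (used for appliance recognition), can be sketched as follows. The thresholds, matching method and file paths are placeholder assumptions rather than the thesis's actual parameters.

```python
# Sketch of the two OpenCV building blocks the thesis applications rely on.
import cv2

def detect_edges(image_path, low=50, high=150):
    """Grayscale Canny edge map, as used for highlighting crack candidates."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Canny(gray, low, high)

def find_template(scene_path, template_path, threshold=0.8):
    """Locate a template (e.g. an appliance faceplate) in a scene image.
    Returns (top-left corner, score) when the match exceeds the threshold."""
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return (max_loc, max_val) if max_val >= threshold else (None, max_val)
```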