7 research outputs found

    Recovery of SHGCs from a single intensity view

    No full text

    Difficult Detection: A Comparison of Two Different Approaches to Eye Detection for Unconstrained Environments

    No full text
    Eye detection is a well-studied problem for the constrained face recognition problem, where we find controlled distances, lighting, and limited pose variation. A far more difficult scenario for eye detection is the unconstrained face recognition problem, where we do not have any control over the environment or the subject. In this paper, we take a look at two different approaches for eye detection under difficult acquisition circumstances, including low-light, distance, pose variation, and blur. A new machine learning approach and several correlation filter approaches, including a new adaptive variant, are compared. We present experimental results on a variety of controlled data sets (derived from FERET and CMU PIE) that have been re-imaged under the difficult conditions of interest with an EMCCD-based acquisition system. The results of our experiments show that our new detection approaches are extremely accurate under all tested conditions, and significantly improve detection accuracy compared to a leading commercial detector. This unique evaluation brings us one step closer to a better solution for the unconstrained face recognition problem. ©2009 IEEE.
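
    For readers unfamiliar with the correlation filter family compared in this paper, the sketch below shows the basic frequency-domain template-matching step such detectors build on: correlate an eye template with the image and take the strongest peak as the detection. The arrays, sizes, and the plain matched-template filter here are illustrative assumptions; this is not the paper's machine learning detector or its adaptive correlation filter variant.

```python
# Minimal sketch of eye detection by frequency-domain cross-correlation,
# in the spirit of the correlation filter approaches discussed above.
# The face chip and eye template below are random stand-ins (assumptions),
# not data or filters from the paper.
import numpy as np

def correlate_fft(image, template):
    """Cross-correlate a grayscale image with an eye template via FFTs.

    Both inputs are 2-D float arrays; the template is zero-padded to the
    image size. Returns a correlation surface the same size as the image.
    """
    H, W = image.shape
    # Zero-mean both signals so uniformly bright regions do not dominate.
    img = image - image.mean()
    tpl = template - template.mean()
    # Correlation in the spatial domain is a product with the conjugated
    # spectrum in the frequency domain.
    F_img = np.fft.fft2(img)
    F_tpl = np.fft.fft2(tpl, s=(H, W))
    return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))

def detect_eye(image, template):
    """Return the (row, col) of the strongest correlation peak."""
    surface = correlate_fft(image, template)
    return np.unravel_index(np.argmax(surface), surface.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.random((128, 128))        # stand-in for a face chip
    eye = face[40:56, 30:54].copy()      # stand-in for an eye template
    print(detect_eye(face, eye))         # peak at (40, 30), the template's origin
```

    Practical correlation filters such as MACE or ASEF replace the raw template with a trained filter designed to sharpen the true peak and suppress false responses, but the detection step remains a correlation followed by peak picking.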

    Omnidirectional Video Applications

    No full text
    In the past decade there has been a significant increase in the use of omni-directional video: video that captures information in all directions. The bulk of this research has concentrated on the use of omni-directional video for navigation and for obstacle avoidance. This paper reviews omni-directional research at the VAST lab that addresses other applications; in particular, we review advances in systems that address the questions "What is/was there?" (tele-observation), "Where am I?" (location determination), "Where have I been?" (textured-tube mosaicing), and "What is moving around me and where is it?" (surveillance). In the area of tele-observation, we briefly review recent results from human factors studies on user interfaces for omni-directional imaging in Military Operations in Urban Terrain (MOUT); these studies clearly demonstrated the importance of omni-directional viewing in such situations. We also review recent work on the DOVE system (Dolphin Omni-directional Video Equipment) and its evaluation. In the area of location determination, we discuss a system that uses a panoramic pyramid imager and a new color histogram-oriented representation to recognize the room in which the camera is located. Addressing the question "Where have I been?", we introduce the idea of textured tubes and present a simple example of such a mosaic computed from omni-directional video. The final area reviewed is recent advances in target detection and tracking from a stationary omni-directional camera.
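
    The location-determination system described above hinges on comparing a color histogram of the current omni-directional frame against stored histograms of known rooms. The sketch below illustrates that general idea; the joint RGB binning, the histogram-intersection similarity, and the random stand-in frames are illustrative assumptions, not the paper's actual representation.

```python
# Minimal sketch of room recognition by color-histogram matching,
# illustrating the "Where am I?" idea above. Bin count, similarity
# measure, and the random frames are assumptions for demonstration.
import numpy as np

def color_histogram(frame, bins=8):
    """Joint RGB histogram of an HxWx3 uint8 frame, L1-normalized."""
    quantized = (frame // (256 // bins)).reshape(-1, 3)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (quantized[:, 0], quantized[:, 1], quantized[:, 2]), 1)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return np.minimum(h1, h2).sum()

def recognize_room(frame, room_models):
    """Return the label of the stored room histogram most similar to frame."""
    h = color_histogram(frame)
    return max(room_models, key=lambda label: histogram_intersection(h, room_models[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-in "rooms": one reference frame per label.
    rooms = {name: color_histogram(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
             for name in ("lab", "hallway", "office")}
    query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    print(recognize_room(query, rooms))
```

    Histogram intersection is a common choice for this kind of appearance matching because it is cheap to compute and fairly tolerant of small viewpoint changes, which suits the rotation-heavy views produced by omni-directional cameras.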