1,827 research outputs found

    How Image Degradations Affect Deep CNN-based Face Recognition?

    Face recognition approaches based on deep convolutional neural networks (CNNs) have been dominating the field. The performance improvements they have delivered on so-called in-the-wild datasets are significant; however, their performance under image quality degradations has not yet been assessed. This is particularly important, since in real-world face recognition applications images may contain various kinds of degradations due to motion blur, noise, compression artifacts, color distortions, and occlusion. In this work, we address this problem and analyze the influence of these image degradations on the performance of deep CNN-based face recognition approaches using the standard LFW closed-set identification protocol. We evaluate three popular deep CNN models, namely AlexNet, VGG-Face, and GoogLeNet. Results indicate that blur, noise, and occlusion cause a significant decrease in performance, while the deep CNN models are robust to color distortions and changes in color balance.
    Comment: 8 pages, 3 figures
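    The degradation types studied (blur, noise, occlusion) can be simulated on test images before feeding them to a recognition model. A minimal NumPy-only sketch for grayscale images, with illustrative parameters rather than the authors' exact protocol:

    ```python
    import numpy as np

    def degrade(image, kind, severity=1.0, rng=None):
        """Apply one synthetic degradation to a grayscale uint8 image."""
        rng = rng or np.random.default_rng(0)
        img = image.astype(np.float64)
        if kind == "noise":
            # additive Gaussian noise; sigma scales with severity
            img = img + rng.normal(0.0, 25.0 * severity, img.shape)
        elif kind == "blur":
            # naive 3x3 box blur, repeated `severity` times
            k = np.ones((3, 3)) / 9.0
            for _ in range(int(severity)):
                pad = np.pad(img, 1, mode="edge")
                img = sum(pad[i:i + img.shape[0], j:j + img.shape[1]] * k[i, j]
                          for i in range(3) for j in range(3))
        elif kind == "occlusion":
            # black patch covering a corner, sized by severity
            h, w = img.shape[:2]
            s = int(min(h, w) * 0.3 * severity)
            img[:s, :s] = 0
        return np.clip(img, 0, 255).astype(np.uint8)
    ```

    An evaluation would then run the unchanged identification protocol once per degradation kind and severity level, and compare accuracy against the clean baseline.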

    Human-centric light sensing and estimation from RGBD images: the invisible light switch

    Lighting design in indoor environments is of primary importance for at least two reasons: 1) people should perceive adequate light; 2) effective lighting design yields substantial energy savings. We present the Invisible Light Switch (ILS) to address both aspects. ILS dynamically adjusts the room illumination level to save energy while keeping the users' perceived light level constant, so the energy saving is invisible to them. Our proposed ILS leverages a radiosity model to estimate the light level perceived by a person within an indoor environment, taking into account the person's position and viewing frustum (head pose). ILS may therefore dim those luminaires which are not seen by the user, resulting in effective energy savings, especially in large open offices (where light may otherwise be on everywhere for a single person). To quantify the system performance, we have collected a new dataset in which people wear luxmeter devices while working in office rooms. The luxmeters measure the amount of light (in lux) reaching the people's gaze, which we consider a proxy for their perceived illumination level. Our initial results are promising: in a room with 8 LED luminaires, the energy consumption in a day may be reduced from 18585 to 6206 watts with ILS (which itself needs 1560 watts to operate). While doing so, the perceived lighting drops by just 200 lux, a value considered negligible when the original illumination level is above 1200 lux, as is normally the case in offices.
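    The net benefit follows from the three figures the abstract reports, once the ILS system's own consumption is subtracted from the gross saving. A quick check of the arithmetic:

    ```python
    def net_daily_saving(baseline, with_ils, ils_overhead):
        """Net saving after accounting for the ILS system's own consumption."""
        saving = baseline - (with_ils + ils_overhead)
        return saving, saving / baseline

    # Daily energy figures for the 8-luminaire office, as given in the abstract
    saving, frac = net_daily_saving(18585, 6206, 1560)
    print(saving, round(frac, 3))  # 10819 0.582, i.e. roughly a 58% net reduction
    ```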

    Face Video Competition

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-01793-3_73
    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, with the widespread use of web-cams and camera-equipped mobile devices, it is now possible to realise facial video recognition rather than resorting to still images alone. In fact, facial video recognition offers many advantages over still-image recognition, including the potential to boost system accuracy and deter spoof attacks. This paper presents the first known benchmarking effort on person identity verification using facial video data. The evaluation involves 18 systems submitted by seven academic institutes.
    The work of NPoh is supported by the advanced researcher fellowship PA0022121477 of the Swiss NSF; NPoh, CHC and JK by the EU-funded Mobio project grant IST-214324; NPC and HF by the EPSRC grants EP/D056942 and EP/D054818; VS and NP by the Slovenian national research program P2-0250(C) Metrology and Biometric Systems, the COST Action 2101 and FP7-217762 HIDE; and AAS by the Dutch BRICKS/BSIK project.
    Poh, N.; Chan, C.; Kittler, J.; Marcel, S.; McCool, C.; Rua, E.; Alba Castro, J.... (2009). Face Video Competition. In: Advances in Biometrics: Third International Conference, ICB 2009, Alghero, Italy, June 2-5, 2009. Proceedings, pp. 715-724. https://doi.org/10.1007/978-3-642-01793-3_73

    On Designing Tattoo Registration and Matching Approaches in the Visible and SWIR Bands

    Face, iris and fingerprint based biometric systems are well-explored areas of research. However, there are law enforcement and military applications where none of the aforementioned modalities may be available to exploit for human identification. In such applications, soft biometrics may be the only clue available for identification or verification purposes. A tattoo is an example of such a soft biometric trait. Unlike face-based biometric systems, which are used in both same-spectral and cross-spectral matching scenarios, tattoo-based human identification is still not a fully explored area of research. At this point in time there are no pre-processing, feature extraction and matching algorithms for tattoo images captured at multiple bands. This thesis focuses on two main challenging problems. The first is cross-spectral tattoo matching. The proposed algorithmic approach takes raw Short-Wave Infrared (SWIR) band tattoo images as input and matches them successfully against their visible-band counterparts. The SWIR tattoo images are captured at 1100 nm, 1200 nm, 1300 nm, 1400 nm and 1500 nm. In an empirical study where multiple photometric normalization techniques were used to pre-process the original multi-band tattoo images, only one was found to significantly improve cross-spectral tattoo matching performance. The second challenging problem was to develop a fully automatic visible-band tattoo image registration system based on SIFT descriptors and the RANSAC algorithm with a homography model. The proposed automated registration approach significantly reduces the operational cost of a tattoo image identification system (using large-scale tattoo image datasets), in which system operators would otherwise have to align each pair of tattoo images manually. At the same time, tattoo matching accuracy is also improved (before vs. after automated alignment) by 45.87% for the NIST-Tatt-C database and 12.65% for the WVU-Tatt database.
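    The registration recipe the thesis names (SIFT correspondences filtered by RANSAC with a homography model) is a standard one. A NumPy-only sketch of the RANSAC homography-fitting step, assuming keypoint detection and descriptor matching have already produced point correspondences, with illustrative (not the thesis's actual) parameter values:

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Direct Linear Transform: fit a 3x3 homography from >= 4 correspondences."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, float))
        H = Vt[-1].reshape(3, 3)  # null-space vector = homography up to scale
        return H / H[2, 2]

    def ransac_homography(src, dst, iters=200, thresh=3.0, rng=None):
        """Robustly fit H from matched points (src[i] -> dst[i]) with outliers."""
        rng = rng or np.random.default_rng(0)
        best_H, best_inliers = None, np.zeros(len(src), bool)
        for _ in range(iters):
            idx = rng.choice(len(src), 4, replace=False)  # minimal sample
            try:
                H = homography_dlt(src[idx], dst[idx])
            except np.linalg.LinAlgError:
                continue
            pts = np.c_[src, np.ones(len(src))] @ H.T
            with np.errstate(divide="ignore", invalid="ignore"):
                proj = pts[:, :2] / pts[:, 2:]
                inliers = np.linalg.norm(proj - dst, axis=1) < thresh
            if inliers.sum() > best_inliers.sum():
                best_H, best_inliers = H, inliers
        return best_H, best_inliers
    ```

    In an actual pipeline, the consensus homography would then warp one tattoo image onto the other before matching; a final least-squares refit on all inliers is a common refinement.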