Face Video Competition
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-01793-3_73

Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, with the widespread use of web-cams and camera-equipped mobile devices, it is now possible to perform facial video recognition rather than relying on still images alone. Facial video recognition offers many advantages over still-image recognition, including the potential to boost system accuracy and to deter spoofing attacks. This paper presents the first known benchmarking effort in person identity verification using facial video data. The evaluation involves 18 systems submitted by seven academic institutes.

The work of NPoh is supported by the advanced researcher fellowship PA0022121477 of the Swiss NSF; NPoh, CHC and JK by the EU-funded Mobio project grant IST-214324; NPC and HF by the EPSRC grants EP/D056942 and EP/D054818; VS and NP by the Slovenian national research program P2-0250(C) Metrology and Biometric Systems, the COST Action 2101 and FP7-217762 HIDE; and AAS by the Dutch BRICKS/BSIK project.

Poh, N.; Chan, C.; Kittler, J.; Marcel, S.; McCool, C.; Rua, E.; Alba Castro, J... (2009). Face Video Competition. In Advances in Biometrics: Third International Conference, ICB 2009, Alghero, Italy, June 2-5, 2009. Proceedings, pp. 715-724. https://doi.org/10.1007/978-3-642-01793-3_73
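Verification benchmarks of this kind are typically scored by trading off false accepts against false rejects. As a minimal sketch, and not the competition's actual scoring code, an approximate equal error rate can be estimated from genuine and impostor score lists:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False accept rate (impostor scores accepted) and false reject
    rate (genuine scores rejected) at a given decision threshold."""
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep the observed scores as candidate thresholds and return
    the operating point where FAR and FRR are closest (approximate EER)."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    best_far, best_frr = min(
        (far_frr(genuine, impostor, t) for t in np.concatenate([genuine, impostor])),
        key=lambda p: abs(p[0] - p[1]),
    )
    return (best_far + best_frr) / 2

genuine = np.array([0.9, 0.8, 0.75, 0.6, 0.95])   # same-person scores
impostor = np.array([0.1, 0.3, 0.55, 0.2, 0.4])   # different-person scores
print(equal_error_rate(genuine, impostor))         # perfectly separated -> 0.0
```

With the toy scores above the two classes are perfectly separable, so the estimated EER is 0; real video-verification scores overlap and yield a strictly positive EER.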
Student Research Showdown: A Research Communication Competition
Student researchers are rarely trained to explain their work to a general audience but must do so throughout their careers. To assist undergraduate researchers in building this skill, the Student Research Showdown, a research video and presentation competition, was created at the University of Texas at Austin. Students create brief videos on which their peers vote, and the top video creators face off with presentations and are awarded prizes by a panel of judges. Students reflect on their experiential learning as they construct a narrative that disseminates their findings, communicates impact, and serves as a sharable testament to their success. Indirect measures indicate that students improve their research communication skills by participating in this event.
The 2nd 3D Face Alignment In The Wild Challenge (3DFAW-video): Dense Reconstruction From Video
3D face alignment approaches have strong advantages over 2D with respect to representational power and robustness to illumination and pose. Over the past few years, a number of research groups have made rapid advances in dense 3D alignment from 2D video and obtained impressive results. How these various methods compare is relatively unknown. Previous benchmarks addressed sparse 3D alignment and single-image 3D reconstruction, but no commonly accepted evaluation protocol exists for dense 3D face reconstruction from video with which to compare them. The 2nd 3D Face Alignment in the Wild from Videos (3DFAW-Video) Challenge extends the previous 3DFAW 2016 competition to the estimation of dense 3D facial structure from video. It presented a new large corpus of profile-to-profile face videos recorded under different imaging conditions and annotated with corresponding high-resolution 3D ground-truth meshes. In this paper we outline the evaluation protocol, the data used, and the results. 3DFAW-Video is to be held in conjunction with the 2019 International Conference on Computer Vision in Seoul, Korea.
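The challenge's exact error metric is not given here; a common choice for dense mesh evaluation, used below purely as an illustrative assumption, is the mean per-vertex Euclidean distance between corresponding vertices, normalized by a reference length such as the outer interocular distance:

```python
import numpy as np

def mean_vertex_error(pred, gt, norm=1.0):
    """Mean per-vertex Euclidean distance between a predicted and a
    ground-truth mesh with corresponding vertices, divided by a
    normalising length (e.g. the outer interocular distance)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    dists = np.linalg.norm(pred - gt, axis=1)  # (N,) per-vertex errors
    return float(dists.mean()) / norm

# toy mesh of three vertices; prediction offset by 0.1 along z
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pred = gt + np.array([0.0, 0.0, 0.1])
print(mean_vertex_error(pred, gt))  # ~0.1 before normalisation
```

Normalizing by a stable facial length makes the error comparable across subjects and head sizes, which is why dense-reconstruction benchmarks usually report a scale-free figure.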
Combining Deep Facial and Ambient Features for First Impression Estimation
14th European Conference on Computer Vision (ECCV), October 8-16, 2016, Amsterdam, Netherlands.

First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information, affect these impressions. In this work, we propose an approach to predict the first impressions people will form of a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to a Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the prediction targets are the Big Five personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36 percentage points below the top system in the competition.
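A Kernel Extreme Learning Machine regressor, in its standard formulation, amounts to ridge regression in kernel space: beta = (K + I/C)^-1 Y. A self-contained sketch with toy data rather than the ChaLearn features (the hyperparameters below are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class KernelELMRegressor:
    """Kernel ELM: ridge regression in kernel space,
    beta = (K + I/C)^-1 Y, trained in closed form."""
    def __init__(self, gamma=1.0, C=10.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, Y):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = K.shape[0]
        self.beta = np.linalg.solve(K + np.eye(n) / self.C, np.asarray(Y, float))
        return self

    def predict(self, X):
        return rbf_kernel(np.asarray(X, float), self.X, self.gamma) @ self.beta

# toy check: regress y = x^2 from a handful of 1-D samples
X = np.linspace(-1, 1, 20)[:, None]
y = (X ** 2).ravel()
model = KernelELMRegressor(gamma=5.0, C=1e4).fit(X, y)
print(model.predict(np.array([[0.5]])))  # close to 0.25
```

In the paper's setting, X would hold the combined facial and ambient feature vectors and Y the five personality-trait targets; the closed-form solve makes training fast compared with iterative optimization.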
Written Evidence Submission - House of Lords Communications Committee: Public Service Broadcasting in the Age of Video on Demand
I consider this Inquiry important and relevant, as the successful UK Public Service Broadcasters BBC, ITV, C4, C5 and S4C are currently facing major challenges from Video on Demand (VoD) services. These challenges primarily concern competition for content from VoD services in a highly competitive broadcasting market characterised by shifts in audience behaviour. Audiences are watching less scheduled TV as they are attracted by the business model of global streaming services like YouTube, Amazon Prime Video and Netflix. Fierce competition from mainly US-based, unregulated global VoD players investing billions of pounds in content has escalated programming costs and made it difficult for tightly regulated PSBs with modest domestic UK budgets to compete. The BBC is facing unprecedented pressure on its licence fee income, and commercial UK PSBs face pressure on their funding models as advertising money is continuously diverted online to new streaming services.
The elements and tactics of Finnish negotiators in international business negotiations: the impact of face-to-face and video negotiation
Due to increased globalization and the resulting intense competition, more and more companies are entering international business, whether through export modes or through intermediate and joint-venture modes, and all of these involve negotiations with business partners. Prior research on international business negotiations (IBNs) has improved our understanding of the impact of culture on IBN strategies and of the choice of communication mode (face-to-face vs. video), with its associated advantages and disadvantages. The massive use of digital tools for conducting IBNs since the outbreak of the COVID-19 pandemic has increased the need to understand how communication mode affects the elements and tactics of negotiators involved in IBNs, yet there is no prior understanding of this impact. The purpose of this thesis is therefore to explore the role of communication mode in the elements and tactics of Finnish negotiators involved in IBNs.
The theoretical framework of this thesis is developed by integrating communication modes, IBN tactics, and Salacuse's model of ten negotiation elements. The framework is tested using web-survey data collected from twenty-five executives of Finnish companies who were involved in both face-to-face and online IBNs. The empirical data were analyzed with t-tests using the statistical package SPSS.
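The thesis ran its t-tests in SPSS; purely as an illustration of the paired comparison it describes, the t statistic for the same respondents rated under two communication modes can be computed by hand (the ratings below are hypothetical, not the survey data):

```python
import numpy as np

def paired_t(a, b):
    """Paired-samples t statistic for two repeated measures on the
    same respondents: t = mean(d) / (sd(d) / sqrt(n))."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = len(d)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

face = [4.2, 3.9, 4.5, 4.1, 3.8, 4.4]    # hypothetical trust ratings, face-to-face
video = [3.6, 3.4, 4.0, 3.7, 3.5, 3.9]   # same respondents, video mode
print(paired_t(face, video))  # large positive t: higher trust face-to-face
```

A large positive t on such data would mirror the thesis finding that trust is rated higher in face-to-face than in video negotiations; significance would then be judged against the t distribution with n-1 degrees of freedom.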
The results indicate that Finnish negotiators' negotiation elements (i.e., strategies) of goal, personal style, emotionalism, risk and trust on the one hand, and their information-exchange tactics on the other, differ significantly between face-to-face and video IBNs. Finnish negotiators focus more on relationship building, express more emotions, communicate more informally, trust more, and take more risks in face-to-face IBNs than in video IBNs. However, they use more information-exchange tactics in video IBNs than in face-to-face IBNs.
These findings have important implications for Eastern and Western negotiators in understanding the strategies and tactics of Finnish negotiators in face-to-face vs. video IBNs, and they help fill the existing research gap.

Owing to intensified competition and globalization, companies are increasingly doing business beyond their home markets. Whatever the mode or channel through which a company conducts international trade, it almost invariably requires negotiating with foreign partners. Prior research on international business negotiations has examined the impact of culture on negotiation strategies, the role of communication media (face-to-face meetings vs. video negotiations), and their advantages and disadvantages. The outbreak of the coronavirus pandemic drove the use of digital communication tools to a new scale, and as a result the need to understand how different communication media affect the negotiation strategies and tactics of negotiators in international business has grown further. Because this impact is not yet clearly understood, the aim of this thesis is to study the effect of these communication media on the negotiation tactics and strategies used by Finnish business negotiators.

This study examines and combines two communication media (face-to-face meetings and video channels), negotiation tactics, and a model of ten negotiation elements developed from Salacuse's work. The hypotheses derived from the theoretical framework were tested in a quantitative study. An e-mail survey collected responses from 25 Finnish business negotiators, and the data were analysed with the statistical package SPSS using the t-test.

The results showed that Finnish business negotiators' strategies concerning negotiation goal, personal style, emotional expression, risk and trust, as well as information sharing as a negotiation tactic, varied significantly between negotiation channels (face-to-face vs. video). Finnish negotiators focused considerably more on relationship building, showed more emotion, behaved more informally, trusted the counterpart more, and were more willing to take risks when meeting their negotiation partner face-to-face than when negotiating over video. However, Finnish business negotiators shared information more openly and more extensively over video than face-to-face.

Although this study cannot offer an all-encompassing generalization about the behaviour of Finnish business negotiators across communication media, it aims to partially fill the existing research gap. It offers meaningful guidance on the possible influence of communication media on Finnish negotiators, not only for the negotiators themselves but also for counterpart negotiators and corporate management.
EmoNets: Multimodal deep learning approaches for emotion recognition in video
The task in the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches that consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means-based "bag-of-mouths" model that extracts visual features around the mouth region, and a relational autoencoder that addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test-set accuracy of 47.67% on the 2014 dataset.
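The simplest way to combine cues from several specialist models is a weighted average of their class-probability vectors; the weights and scores below are hypothetical, not the paper's learned fusion:

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Combine per-modality class-probability vectors by a (weighted)
    average and return the fused vector and the winning class index."""
    probs = np.stack([np.asarray(p, float) for p in prob_list])  # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, float) / np.sum(weights)
    fused = weights @ probs                                      # (n_classes,)
    return fused, int(np.argmax(fused))

# seven EmotiW emotion classes; three hypothetical modality outputs
video = np.array([0.5, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05])
audio = np.array([0.2, 0.4, 0.1, 0.1, 0.1, 0.05, 0.05])
mouth = np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.05, 0.05])
fused, label = late_fusion([video, audio, mouth], weights=[2, 1, 1])
print(label)  # class 0 wins under these weights
```

More elaborate fusion schemes, such as the ones explored in the paper, learn the combination instead of fixing the weights, but the shape of the computation is the same: per-modality scores in, one fused decision out.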
DFGC 2022: The Second DeepFake Game Competition
This paper presents the summary report on our DFGC 2022 competition. DeepFake technology is rapidly evolving, and realistic face-swaps are becoming more deceptive and difficult to detect. At the same time, methods for detecting DeepFakes are also improving, creating a two-party game between DeepFake creators and defenders. This competition provides a common platform for benchmarking this game between the current state-of-the-art DeepFake creation and detection methods. The main research question it answers is the relative standing of the two adversaries when pitted against each other. This is the second edition after last year's DFGC 2021, with a new, more diverse video dataset, a more realistic game setting, and more reasonable evaluation metrics. With this competition, we aim to stimulate research ideas for building better defenses against DeepFake threats. We also release our DFGC 2022 dataset, contributed by both our participants and ourselves, to enrich the DeepFake data resources available to the research community (https://github.com/NiCE-X/DFGC-2022).

Comment: Accepted by IJCB 202
Report on the BTAS 2016 Video Person Recognition Evaluation
© 2016 IEEE. This report presents results from the Video Person Recognition Evaluation held in conjunction with the 8th IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS). Two experiments required algorithms to recognize people in videos from the Point-and-Shoot Face Recognition Challenge Problem (PaSC). The first consisted of videos from a tripod-mounted high-quality video camera; the second contained videos acquired from 5 different handheld video cameras. Each experiment comprised 1,401 videos of 265 subjects; the subjects, the scenes, and the actions carried out by the people are the same in both experiments. An additional experiment required algorithms to recognize people in videos from the Video Database of Moving Faces and People (VDMFP), comprising 958 videos of 297 subjects. Four groups from around the world participated in the evaluation. The top verification rate for PaSC from this evaluation is 0.98 at a false accept rate of 0.01, a remarkable advancement in performance from the competition held at FG 2015.
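A verification rate at a fixed false accept rate, such as the 0.98 at FAR 0.01 reported above, can be estimated from score lists as follows; this is a sketch with synthetic scores, not PaSC data or the evaluation's official scoring tool:

```python
import numpy as np

def verification_rate_at_far(genuine, impostor, target_far=0.01):
    """Verification (true accept) rate at the lowest threshold whose
    empirical false accept rate does not exceed target_far."""
    impostor = np.sort(np.asarray(impostor, float))
    # impostor-score quantile just inside the allowed FAR budget
    k = int(np.ceil((1.0 - target_far) * len(impostor)))
    threshold = impostor[min(k, len(impostor) - 1)]
    return float(np.mean(np.asarray(genuine, float) > threshold))

rng = np.random.default_rng(0)
genuine = rng.normal(4.0, 1.0, 1000)   # well-separated synthetic scores
impostor = rng.normal(0.0, 1.0, 1000)
rate = verification_rate_at_far(genuine, impostor, target_far=0.01)
print(round(rate, 3))
```

Operating points like FAR = 0.01 fix the tolerated fraction of impostor accepts so that systems can be compared at a single, application-relevant threshold rather than across whole ROC curves.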