16 research outputs found

    WSD: Wild Selfie Dataset for Face Recognition in Selfie Images

    With the rise of smartphones in recent years, capturing selfie images has become a widespread trend, so efficient approaches are needed for recognising faces in selfie images. The short distance between the camera and the face in selfies, together with the visual effects offered by selfie apps, makes face recognition more challenging for existing approaches. A dedicated dataset is needed to encourage research on recognising faces in selfie images. To address this problem and facilitate research on selfie face images, we develop a challenging Wild Selfie Dataset (WSD), where the images are captured with the selfie cameras of different smartphones, unlike existing datasets in which most images are captured in a controlled environment. The WSD dataset contains 45,424 images of 42 individuals (24 female and 18 male subjects), divided into 40,862 training and 4,562 test images. The average number of images per subject is 1,082, with a minimum of 518 and a maximum of 2,634 images for any subject. The proposed dataset covers several challenges, including but not limited to augmented-reality filtering, mirrored images, occlusion, illumination, scale, expression, viewpoint, aspect ratio, blur, partial faces, rotation, and alignment. We compare the proposed dataset with existing benchmark datasets in terms of these characteristics. The complexity of the WSD dataset is also confirmed experimentally: the performance of existing state-of-the-art face recognition methods is poor on WSD compared to existing datasets. The proposed WSD dataset therefore opens up new challenges in face recognition and can help the community study the specific challenges of selfie images and develop improved methods for face recognition in selfie images.
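    The dataset statistics quoted in the abstract are internally consistent, as a quick arithmetic check shows (all numbers are taken directly from the abstract):

    ```python
    # Consistency check of the WSD statistics quoted in the abstract.
    total_images = 45_424
    train_images = 40_862
    test_images = 4_562
    subjects = 42

    # The train/test split should account for every image.
    assert train_images + test_images == total_images

    # Average images per subject; the abstract reports the rounded value 1,082.
    avg_per_subject = round(total_images / subjects)
    print(avg_per_subject)  # 1082
    ```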

    Gender recognition from unconstrained selfie images: a convolutional neural network approach

    Human gender recognition is an essential demographic tool, with applications in forensic science, surveillance systems, and targeted marketing. This research has traditionally been driven by standard face images and hand-crafted features. That approach has achieved good results, but the robustness of the extracted features depends strongly on the reliability of the facial images: any small change in the query face image can change the results. Moreover, the performance of current techniques in unconstrained environments remains inefficient, especially when contrasted with recent breakthroughs in other areas of computer vision research. This paper introduces a novel technique for human gender recognition from non-standard selfie images using deep learning. Selfie photos are uncontrolled partial or full-frontal body images, usually taken by people themselves in real-life environments. To the best of our knowledge, this is the first paper to identify gender from selfie photos using a deep learning approach. Experimental results on the selfie dataset demonstrate the effectiveness of the proposed technique, which recognises gender from such images with 89% accuracy. The performance is further consolidated by testing on several benchmark datasets widely used in the field, namely Adience, LFW, FERET, NIVE, Caltech WebFaces, and CAS-PEAL-R1.

    Public mirror: legitimizing 'social' photography as a contemporary discipline

    With all the public information about any famous person, topic or event 'googleable' on the Internet, there seems to be nothing new for 'digital natives' to discover other than the elusive Self. The Self is the 'new frontier', and the smartphone camera is at the forefront of this quest, unearthing and exhibiting different kinds of content every day. With over 95 million photographs and videos shared on Instagram daily, photography has merged with social networking sites and applications (SNS/A) to become a recognisable phenomenon called 'Social' Photography. Despite its rich association with legitimate visual art forms and numerous scholarly articles examining its various forms, the term 'Social' Photography is unfamiliar to most. This inquiry discusses 'Social' Photography in relation to existing literature to argue for its establishment as a legitimate discipline within the Creative Arts. Acknowledging its subjectivity and its use of digital technologies, this study employed an interpretive group of methods and identified six characteristics of 'Social' Photography, namely (i) Activity, (ii) Participation, (iii) Identity, (iv) Glamour, (v) Protest, and (vi) Spectacle, that exemplify its capacity to curate a meaningful democratic public image. These six aspects can be used to categorise and formalise individual behaviour that can be analysed and interpreted to foster a better understanding of 'Social' Photography as a discipline.

    Describing Images by Semantic Modeling using Attributes and Tags

    This dissertation addresses the problem of describing images using visual attributes and textual tags, a fundamental task that narrows the semantic gap between the visual reasoning of humans and machines. Automatic image annotation assigns relevant textual tags to images. In this dissertation, we propose a query-specific formulation based on Weighted Multi-view Non-negative Matrix Factorization to perform automatic image annotation. Our proposed technique seamlessly adapts to changes in training data, naturally solves the problem of feature fusion, and handles the challenge of rare tags. Unlike tags, attributes are category-agnostic, so their combinations model an exponential number of semantic labels. Motivated by the fact that most attributes describe local properties, we propose exploiting localization cues, through semantic parsing of the human face and body, to improve person-related attribute prediction. We also demonstrate that image-level attribute labels can be used effectively as weak supervision for the task of semantic segmentation. Next, we analyze selfie images using tags and attributes. We collect the first large-scale selfie dataset and annotate it with attributes covering characteristics such as gender, age, race, facial gestures, and hairstyle. We then study the popularity and sentiments of the selfies given the estimated appearance of various semantic concepts; in brief, we automatically infer what makes a good selfie. Despite its extensive usage, the deep learning literature falls short in understanding the characteristics and behavior of batch normalization. We conclude this dissertation by providing a fresh view, in light of information geometry and Fisher kernels, of why batch normalization works. We propose Mixture Normalization, which disentangles modes of variation in the underlying distribution of the layer outputs, and confirm that it effectively accelerates training of different batch-normalized architectures, including Inception-V3, Densely Connected Networks, and Deep Convolutional Generative Adversarial Networks, while achieving better generalization error.
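    The standard batch-normalization transform the dissertation revisits can be sketched in a few lines of numpy. The mixture-style variant at the end is a hypothetical paraphrase of the idea described in the abstract (normalize each mode of a multimodal batch with its own statistics rather than the pooled batch statistics), not the author's implementation:

    ```python
    import numpy as np

    def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
        """Standard batch normalization: zero-mean, unit-variance per feature."""
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    # A bimodal batch: two Gaussian modes, one per half of the batch.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-3, 1, (64, 8)), rng.normal(3, 1, (64, 8))])

    y = batch_norm(x)
    print(y.mean(), y.std())  # close to 0 and 1, but the bimodal shape remains

    # Mixture-style sketch: normalize each mode with its own statistics
    # (here the mode assignment is known by construction; in Mixture
    # Normalization it would be estimated from the data).
    y_mix = np.concatenate([batch_norm(x[:64]), batch_norm(x[64:])])
    ```

    The point of the sketch is that pooled batch statistics whiten the batch as a whole while leaving its modes intact, which is the mismatch the proposed Mixture Normalization is described as addressing.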

    Self-denigration and the mixed messages of 'ugly' selfies in Instagram


    Real-Time Smile Detection using Deep Learning

    Real-time smile detection from facial images is useful in many real-world applications, such as automatic photo capture on mobile phone cameras or interactive distance learning. In this paper, we study different architectures of deep object-detection networks for solving the real-time smile detection problem. We then propose a combination of a lightweight convolutional neural network architecture (BKNet) with an efficient object-detection framework (RetinaNet). Evaluation on two datasets (GENKI-4K, UCF Selfie) with a mid-range hardware device (GTX TITAN Black) shows that our proposed method improves both the accuracy and the inference time of the original RetinaNet, reaching real-time performance. Compared with the state-of-the-art object-detection framework YOLO, our method has a higher inference time but still reaches real-time performance and obtains higher smile-detection accuracy on both experimental datasets.

    BodyVerse

    This paper supports the MFA dance thesis film BodyVerse. Exploring the intertwining relationship of body systems with the natural world, it brings somatic principles such as Body Mind Centering and dance improvisation together with film legacies and digital platforms.

    Intensive English Language Course (Level A2)

    Officially approved course book. This intensive English language course for students of non-linguistic specialities focuses on developing conversational speech. In this course, students learn to avoid common errors in communication and work on problem areas and on the differences between spoken and written English. The course covers both formal and informal language and introduces contemporary expressions and constructions that help learners feel more confident in an English-speaking environment. Prepared at the Department of Foreign Languages and Professional Communication. Software used: Adobe Acrobat. Works of Samara University staff (electronic version).