
    An Application of Deep-Learning to Understand Human Perception of Art

    Eye movement patterns are known to differ when viewing stimuli under different tasks, but less is known about how these patterns change as a function of expertise. When a visual pattern is viewed, a particular sequence of eye movements is executed; this sequence is called a scanpath. In this work we attempt to answer the question, “Do art novices and experts look at paintings differently?” If they do, we should be able to discriminate between the two groups by applying machine learning to their scanpaths. This can be done with algorithms for Multi-Fixation Pattern Analysis (MFPA), a family of machine learning algorithms for making inferences about people from their gaze patterns. MFPA and related approaches have been widely used to study viewing behavior during visual tasks, but earlier approaches used only gaze position (x, y), fixation duration, and temporal order, not the actual visual features in the image. In this work, we extend MFPA algorithms to use visual features, addressing a question overlooked by most earlier studies: if experts and novices do differ, how different are their viewing patterns, and do these differences hold for both low- and high-level image features? To this end, we combine MFPA with a deep Convolutional Neural Network (CNN). Instead of converting a trial’s 2-D fixation positions into Fisher Vectors, we extract the image features surrounding the fixations using a deep CNN and pool them into a Fisher Vector for each trial. The Fisher Vector is an image representation obtained by pooling local image features and is frequently used as a global image descriptor in visual classification. We call this approach MFPA-CNN. While CNNs have previously been used to recognize and classify objects in paintings, this work goes a step further and studies human perception of paintings. Ours is the first attempt to use MFPA and CNNs to study viewing patterns in the field of art. If our approach succeeds in differentiating novices from experts, with and without instructions, using both low- and high-level CNN image features, it would demonstrate that novices and experts view art differently. The outcome of this study could then be used to investigate which image features the subjects concentrate on. We expect this work to influence further research in image perception and experimental aesthetics.
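    The pipeline described in the abstract lends itself to a short sketch: crop a patch around each fixation, describe it with a feature extractor (a pretrained CNN in the paper; raw pixels here as a self-contained stand-in), and pool a trial’s descriptors into a Fisher Vector with respect to a diagonal-covariance Gaussian mixture. This is a minimal illustration under stated assumptions, not the authors’ implementation; the function names, patch size, and the reduced means-only form of the Fisher Vector are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patch_features(image, fixations, patch_size=64, cnn=None):
    """Crop a patch around each (x, y) fixation and describe it.

    `cnn` stands in for any pretrained feature extractor (e.g. an
    intermediate CNN layer); with cnn=None we fall back to raw pixels
    so the sketch stays self-contained.
    """
    half = patch_size // 2
    feats = []
    for x, y in fixations:
        x, y = int(x), int(y)
        patch = image[max(0, y - half):y + half, max(0, x - half):x + half]
        if cnn is not None:
            feats.append(cnn(patch))
        else:
            feats.append(np.resize(patch, patch_size * patch_size).astype(float))
    return np.vstack(feats)

def fisher_vector(local_feats, gmm):
    """Pool local descriptors into a Fisher Vector w.r.t. a fitted
    diagonal-covariance GMM (gradients w.r.t. the Gaussian means only,
    the most common reduced form)."""
    q = gmm.predict_proba(local_feats)                 # (T, K) soft assignments
    T = local_feats.shape[0]
    diff = local_feats[:, None, :] - gmm.means_[None]  # (T, K, D)
    diff /= np.sqrt(gmm.covariances_)[None]            # assumes covariance_type="diag"
    fv = (q[:, :, None] * diff).sum(axis=0)            # (K, D)
    fv /= (T * np.sqrt(gmm.weights_)[:, None])
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalization

# Usage sketch (hypothetical data): fit the GMM on descriptors pooled
# across training trials, then one Fisher Vector per trial feeds a
# linear classifier (expert vs. novice).
# gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_feats)
# X = np.stack([fisher_vector(extract_patch_features(img, fx), gmm)
#               for img, fx in trials])
```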

    Recognition, Analysis, and Assessments of Human Skills using Wearable Sensors

    One of the biggest social issues in mature societies such as those of Europe and Japan is the aging population and declining birth rate. These societies face the retirement of expert workers, doctors, and engineers; especially in sectors where training an expert takes a long time, such as medicine and industry, the retirement and injury of experts is a serious problem. Technology to support the training and assessment of skilled workers (such as doctors and manufacturing workers) is therefore strongly needed. Although some solutions exist, most of them are video-based, which violates the subjects’ privacy, and they are not easy to deploy because they require large amounts of training data. This thesis provides a novel framework to recognize, analyze, and assess human skills with minimal customization cost. The presented framework tackles this problem in two domains: industrial setups and the medical operations of catheter-based cardiovascular interventions (CBCVI). In particular, the contributions of this thesis are four-fold.

    First, it proposes an easy-to-deploy framework for human activity recognition based on a zero-shot learning approach that learns basic actions and objects. The model recognizes unseen activities as combinations of the basic actions, learned in a preliminary stage, and the objects involved. It is therefore fully configurable by the user and can detect entirely new activities. Second, a novel gaze-estimation model for an attention-driven object detection task is presented. The key features of the model are: (i) the use of deformable convolutional layers to better incorporate the spatial dependencies of objects and backgrounds of different shapes, and (ii) the formulation of the gaze-estimation problem in two different ways, as a classification problem as well as a regression problem. We combine both formulations using a joint loss that incorporates both the cross-entropy and the mean-squared error to train our model; this reduced the model’s error from 6.8 with the cross-entropy loss alone to 6.4 with the joint loss. The third contribution targets the quantification of action quality using wearable sensors. To address the variety of scenarios, we target two possibilities: (a) both expert and novice data are available, and (b) only expert data is available, a quite common case in safety-critical scenarios. Both of the developed methods are deep-learning based: in the first we use autoencoders with a One-Class SVM, and in the second we use Siamese networks. These methods allow us to encode the expert’s expertise and to learn the differences between novice and expert workers, enabling quantification of a novice’s performance in comparison to the expert worker. The fourth contribution explicitly targets medical practitioners and provides a novel methodology for gaze-based spatio-temporal analysis of CBCVI data. The developed methodology allows continuous registration and analysis of gaze data to study the visual X-ray image processing (XRIP) strategies of expert operators in live-case scenarios, and may assist in transferring experts’ reading skills to novices.
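    As a concrete illustration of the second contribution’s joint loss, the sketch below combines the two formulations the abstract describes: cross-entropy over a coarse grid of gaze cells (the classification view) plus mean-squared error on the continuous gaze point (the regression view). The grid size, the alpha weighting, and the normalized-coordinate convention are assumptions for illustration, not the thesis’s exact design.

```python
import torch
import torch.nn.functional as F

def joint_gaze_loss(cls_logits, reg_pred, gaze_xy, grid=(7, 7), alpha=0.5):
    """Joint classification + regression loss for gaze estimation.

    cls_logits: (B, grid_h * grid_w) scores over coarse gaze cells
    reg_pred:   (B, 2) predicted gaze point, normalized to [0, 1]
    gaze_xy:    (B, 2) ground-truth gaze point, normalized to [0, 1]
    """
    gh, gw = grid
    # Map each continuous gaze point to the index of the grid cell it lies in.
    col = (gaze_xy[:, 0] * gw).long().clamp(0, gw - 1)
    row = (gaze_xy[:, 1] * gh).long().clamp(0, gh - 1)
    target_cell = row * gw + col

    ce = F.cross_entropy(cls_logits, target_cell)  # classification view
    mse = F.mse_loss(reg_pred, gaze_xy)            # regression view
    return alpha * ce + (1.0 - alpha) * mse

# Usage sketch (hypothetical model outputs):
# loss = joint_gaze_loss(model_logits, model_xy, batch_gaze)
```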
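    For the expert-only scenario of the third contribution, a minimal sketch of the autoencoder plus One-Class SVM idea follows: train an autoencoder on windows of expert sensor data only, fit a One-Class SVM on the resulting latent codes, and score new performers by how close their codes fall to the expert region. The layer sizes, the window representation, and the SVM parameters are illustrative assumptions, not the thesis’s configuration.

```python
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class SensorAutoencoder(nn.Module):
    """Tiny autoencoder over fixed-length windows of wearable-sensor
    features; the 64/8 layer sizes are illustrative."""
    def __init__(self, n_features, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def fit_expert_model(expert_windows, epochs=50, lr=1e-3):
    """Train the autoencoder on expert data only, then fit a One-Class
    SVM on the latent codes so it describes the 'expert' region."""
    x = torch.as_tensor(expert_windows, dtype=torch.float32)
    model = SensorAutoencoder(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(x)
        nn.functional.mse_loss(recon, x).backward()
        opt.step()
    with torch.no_grad():
        _, z = model(x)
    svm = OneClassSVM(nu=0.1, gamma="scale").fit(z.numpy())
    return model, svm

def expertise_score(model, svm, windows):
    """Higher decision-function values = closer to expert behavior."""
    with torch.no_grad():
        _, z = model(torch.as_tensor(windows, dtype=torch.float32))
    return svm.decision_function(z.numpy())
```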