363 research outputs found

    Peer assessment and knowledge discovering in a community of learners

    Thanks to the exponential growth of the Internet, distance education is becoming increasingly strategic in many fields of daily life. Its main advantage is that students can learn through appropriate web platforms that let them take advantage of multimedia and interactive teaching materials, without constraints of time or space. Today the Internet offers many platforms suitable for this purpose, such as Moodle, ATutor and others. Coursera is another example of a platform that offers different courses to thousands of enrolled students. This approach to learning, however, poses new problems, such as assessing the learning status of each learner when thousands of students follow a course, as in Massive Open Online Courses (MOOCs). Peer assessment can be a solution to this problem: evaluation takes place between peers, creating a dynamic in the community of learners that evolves autonomously. In this article we present a first step in this direction through a peer assessment mechanism led by the teacher, who intervenes by evaluating a very small subset of the students. Through a mechanism based on machine learning, in particular on a modified form of K-NN, and given the teacher's grades, the system should converge towards an evaluation as similar as possible to the one the teacher would have given. An experiment is presented, with encouraging results.
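    A minimal sketch of the grade-propagation idea follows, assuming each student is represented by a feature vector (e.g., derived from their peer-rating behavior); the paper's actual modified K-NN may differ in its distance and weighting choices:

```python
import numpy as np

def propagate_grades(features, teacher_grades, k=3):
    """Assign each ungraded student the distance-weighted mean grade of the
    k nearest teacher-graded students (illustrative K-NN, not the paper's)."""
    graded = [i for i, g in enumerate(teacher_grades) if g is not None]
    grades = {i: float(teacher_grades[i]) for i in graded}
    for i, g in enumerate(teacher_grades):
        if g is not None:
            continue
        d = np.linalg.norm(features[graded] - features[i], axis=1)
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-9)  # closer graded peers weigh more
        vals = np.array([teacher_grades[graded[j]] for j in nn], dtype=float)
        grades[i] = float(np.dot(w, vals) / w.sum())
    return [grades[i] for i in range(len(teacher_grades))]

# Toy example: 5 students, 2-D behaviour features, teacher graded two of them.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.15, 0.25], [0.85, 0.75], [0.5, 0.5]])
print(propagate_grades(X, [28, None, None, 30, None]))
```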

    CABALA: Collaborative Architectures based on Biometric Adaptable Layers and Activities

    The lack of communication and of dynamic adaptation to working settings often hinders stable performance of the subsystems of present multibiometric architectures. The calibration phase often uses a specific training set, so that (sub)systems are tuned to well-determined conditions. In this work we investigate the modular construction of systems according to the CABALA (Collaborative Architectures based on Biometric Adaptable Layers and Activities) approach. Different levels of flexibility and collaboration are supported. The computation of a system response reliability (SRR) measure, for each single response of each single subsystem, allows the architecture to address temporary decreases of accuracy due to adverse conditions (light, dirty sensors, etc.), by possibly refusing a poorly reliable response or by asking for a new recognition operation. Subsystems can collaborate at a twofold level, both in returning a jointly determined answer and in co-evolving to tune to changing conditions. At the first level, single-biometric subsystems implement the N-Cross Testing Protocol: they work in parallel, but exchange information to reach the final response. At a higher level of interdependency, the parameters of each subsystem can be dynamically optimized according to the behavior of its companions. To this aim, an additional Supervisor Module analyzes the single results and, in our present implementation, modifies the degree of reliability required from each subsystem to accept its future responses. The paper explores different combinations of these novel strategies. We demonstrate that as component collaboration increases, so do both the overall system accuracy and the ability to identify unstable subsystems.
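    The sketch below illustrates one plausible reading of the reliability-gated collaboration described above: each subsystem returns a response together with its SRR, responses below a per-subsystem threshold are discarded, and a supervisor raises the threshold of dissenting subsystems. All names and the threshold-update rule are assumptions, not the paper's exact protocol:

```python
class Supervisor:
    """Hypothetical reliability-gated fusion in the CABALA spirit."""

    def __init__(self, names, threshold=0.5, step=0.05):
        self.thresholds = {n: threshold for n in names}
        self.step = step

    def fuse(self, responses):
        # Keep only responses whose reliability clears the subsystem threshold.
        accepted = {n: (label, srr) for n, (label, srr) in responses.items()
                    if srr >= self.thresholds[n]}
        if not accepted:
            return None  # no reliable answer: ask for a new acquisition
        # Reliability-weighted vote among the accepted subsystems.
        votes = {}
        for label, srr in accepted.values():
            votes[label] = votes.get(label, 0.0) + srr
        winner = max(votes, key=votes.get)
        # Co-evolution (assumed rule): demand more reliability from dissenters.
        for n, (label, srr) in accepted.items():
            if label != winner:
                self.thresholds[n] = min(1.0, self.thresholds[n] + self.step)
        return winner

sup = Supervisor(["face", "ear", "fingerprint"])
print(sup.fuse({"face": ("id_7", 0.9), "ear": ("id_7", 0.6),
                "fingerprint": ("id_3", 0.55)}))  # -> id_7
```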

    FARO: FAce Recognition against Occlusions and Expression Variations

    Face recognition is widely considered one of the most promising biometric techniques, allowing high recognition rates without being too intrusive. Many approaches have been presented to solve this special pattern recognition problem, also addressing the challenging cases of face changes, mainly occurring in expression, illumination, or pose. On the other hand, less work can be found in the literature that deals with partial occlusions (i.e., sunglasses and scarves). This paper presents FAce Recognition against Occlusions and Expression Variations (FARO), a new method based on partitioned iterated function systems (PIFSs) which is quite robust with respect to expression changes and partial occlusions. In general, algorithms based on PIFSs compute a map of self-similarities inside the whole input image, searching for correspondences among small square regions. However, traditional algorithms of this kind suffer from local distortions such as occlusions. To overcome this limitation, the information extracted by PIFSs is made local by working independently on each face component (eyes, nose, and mouth). Distortions introduced by likely occlusions or expression changes are further reduced by means of an ad hoc distance measure. To experimentally confirm the robustness of the proposed method to lighting and expression variations, as well as to occlusions, FARO has been tested on the AR-Faces database, one of the main benchmarks for the scientific community in this context. A further validation of FARO's performance is provided by the experimental results produced on the Face Recognition Grand Challenge database.
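    A toy sketch of the component-wise matching idea: one distance per facial component, with the worst-matching component down-weighted so that a single occluded region cannot dominate the overall score. The per-component distance here is a plain Euclidean stand-in for the paper's PIFS-based one, and the weighting rule is an assumption:

```python
import numpy as np

def component_distance(probe, gallery):
    """probe/gallery: dicts mapping component name -> feature vector.
    Down-weights the worst component, assumed to be occluded."""
    d = {c: np.linalg.norm(probe[c] - gallery[c]) for c in probe}
    worst = max(d, key=d.get)  # likely occluded component
    weights = {c: (0.2 if c == worst else 1.0) for c in d}
    return sum(weights[c] * d[c] for c in d) / sum(weights.values())

# Occluded eyes (large mismatch) barely affect the combined score.
probe   = {"eyes": np.ones(8) * 9.0, "nose": np.zeros(8), "mouth": np.zeros(8)}
gallery = {"eyes": np.zeros(8),      "nose": np.zeros(8), "mouth": np.zeros(8)}
print(component_distance(probe, gallery))
```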

    Entropy Based Template Analysis in Face Biometric Identification Systems

    The accuracy of a biometric matching algorithm relies on its ability to separate the score distributions of genuine and impostor subjects. However, capture conditions (e.g., illumination or acquisition devices) as well as factors related to the subject at hand (e.g., pose or occlusions) may lead even a generally accurate algorithm to provide incorrect answers. Techniques for face classification are still too sensitive to image distortion, and this limit hinders their use in large-scale commercial applications, which typically run in uncontrolled settings. This paper joins the notion of quality with the further concept of representativeness of a biometric sample, taking into account the case of multiple samples per subject. Though of excellent quality, the gallery samples belonging to a certain subject might be too similar to one another, so that even a moderately different input sample of the same subject will cause an error. This seems to indicate that quality measures alone cannot guarantee good performance. In practice, a subject's gallery should include a sufficient amount of possible variations, in order to allow correct recognition in different situations. We call this gallery feature representativeness. A significant feature to consider together with quality is therefore the sufficient representativeness of each subject's gallery. A strategy to address this problem is to investigate the role of entropy, computed over a set of samples of the same subject. The paper presents a number of applications of such a measure in handling the galleries of the different users registered in a system. The resulting criteria might also guide template updating, to ensure gallery representativeness over time.
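    As an illustration, one simple way to turn this idea into a number is the Shannon entropy of the pairwise-distance distribution within a subject's gallery; the paper's exact measure may well differ:

```python
import numpy as np

def gallery_entropy(samples, bins=10, max_dist=20.0):
    """Shannon entropy of the pairwise-distance histogram within one
    subject's gallery: near-duplicate samples give low entropy, a gallery
    covering many variations gives higher entropy (illustrative measure)."""
    n = len(samples)
    dists = [np.linalg.norm(samples[i] - samples[j])
             for i in range(n) for j in range(i + 1, n)]
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, max_dist))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
near_dupes = rng.normal(0, 0.01, size=(6, 32))  # too-similar gallery
varied     = rng.normal(0, 1.0,  size=(6, 32))  # more representative gallery
print(gallery_entropy(near_dupes), gallery_entropy(varied))
```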

    Multimodal Emotion Recognition via Convolutional Neural Networks: Comparison of different strategies on two multimodal datasets

    The aim of this paper is to investigate emotion recognition using a multimodal approach that exploits convolutional neural networks (CNNs) with multiple inputs. Multimodal approaches allow different modalities to cooperate in order to achieve generally better performance, because different features are extracted from different pieces of information. In this work, the facial frames, the optical flow computed from consecutive facial frames, and the Mel spectrograms (from the word melody) are extracted from videos and combined in different ways to understand which modality combination works best. Several experiments are run on the models, first considering one modality at a time, so that good accuracy results are found for each modality. Afterward, the models are concatenated to create a final model that accepts multiple inputs. The experiments use the BAUM-1 (Bahçeşehir University Multimodal Affective Database - 1) and RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) datasets, which both collect two distinct sets of videos based on the intensity of the expression, that is, acted/strong or spontaneous/normal, providing representations of the following emotional states taken into consideration: angry, disgust, fearful, happy, and sad. The performance of the proposed models is shown through accuracy results and confusion matrices, demonstrating better accuracy than comparable proposals in the literature. The best accuracy achieved on the BAUM-1 dataset is about 95%, while on RAVDESS it is about 95.5%.
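    A minimal PyTorch sketch of the late-fusion idea: one small CNN branch per modality, features concatenated and classified into the five emotions listed above. The layer sizes and input shapes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

def branch(in_ch):
    """Tiny CNN branch producing a 32-dim feature per sample."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class FusionNet(nn.Module):
    def __init__(self, n_classes=5):  # angry, disgust, fearful, happy, sad
        super().__init__()
        self.frames = branch(3)  # RGB face frame
        self.flow = branch(2)    # optical flow (dx, dy)
        self.audio = branch(1)   # Mel spectrogram
        self.head = nn.Linear(32 * 3, n_classes)

    def forward(self, frame, flow, mel):
        # Concatenate per-modality features, then classify (late fusion).
        z = torch.cat([self.frames(frame), self.flow(flow), self.audio(mel)], dim=1)
        return self.head(z)

net = FusionNet()
logits = net(torch.randn(4, 3, 64, 64), torch.randn(4, 2, 64, 64),
             torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 5])
```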

    Inflated 3D ConvNet context analysis for violence detection

    According to the Wall Street Journal, one billion surveillance cameras will be deployed around the world by 2021. This amount of information can hardly be managed by humans. Using an Inflated 3D ConvNet as backbone, this paper introduces a novel automatic violence detection approach that outperforms state-of-the-art existing proposals. Most of those proposals include a pre-processing step that focuses only on some regions of interest in the scene, i.e., those actually containing a human subject. In this regard, this paper also reports the results of an extensive analysis on whether and how the context can affect the adopted classifier's performance. The experiments show that context-free footage yields a substantial deterioration of classifier performance (2% to 5%) on publicly available datasets. However, they also demonstrate that performance stabilizes in context-free settings, no matter the level of context restriction applied. Finally, a cross-dataset experiment investigates how well results obtained in a single-collection experiment (same dataset used for training and testing) generalize to cross-collection settings (different datasets used for training and testing).
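    The following hypothetical sketch shows what a "context-free" preprocessing step could look like: everything outside the person bounding boxes is blanked, frame by frame, before the clip reaches the I3D backbone. The boxes are given here; in practice they would come from a person detector, and the paper's pipeline may differ:

```python
import numpy as np

def strip_context(clip, boxes_per_frame):
    """clip: (T, H, W, C) uint8 video; boxes_per_frame: per-frame lists of
    [x1, y1, x2, y2] boxes. Pixels outside all boxes are zeroed."""
    out = np.zeros_like(clip)
    for t, boxes in enumerate(boxes_per_frame):
        for x1, y1, x2, y2 in boxes:
            out[t, y1:y2, x1:x2] = clip[t, y1:y2, x1:x2]  # keep subjects only
    return out

clip = np.random.randint(0, 255, size=(16, 224, 224, 3), dtype=np.uint8)
boxes = [[[60, 40, 160, 200]]] * 16  # one static person box per frame (toy)
print(strip_context(clip, boxes).shape)  # (16, 224, 224, 3)
```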

    Normal Maps vs. Visible Images: Comparing Classifiers and Combining Modalities

    This work investigates face recognition based on normal maps, and the performance improvement that can be obtained by exploiting them within a multimodal system, where a further independent module processes visible images. We first propose a technique to align two 3D models of a face by means of normal maps, which is very fast while providing accuracy comparable to well-known and more general techniques such as Iterative Closest Point (ICP). Moreover, we propose a matching criterion based on a technique which exploits difference maps. It does not reduce the dimension of the feature space, but performs a weighted matching between two normal maps. Second, we explore the range of performances offered by different linear and non-linear classifiers when applied to the normal maps generated from the above aligned models. These experiments highlight the added value of the chromatic information contained in normal maps. We analyse a solid list of classifiers selected due to their historical reference value (e.g., Principal Component Analysis) or to their good performance in the bidimensional setting (Linear Discriminant Analysis, Partitioned Iterated Function Systems). Last but not least, we perform experiments to measure how different ways of combining normal maps and visible images can enhance the results obtained by the single recognition systems, given that specific characteristics of the images are taken into account. For these last experiments we only consider the classifier giving the best average results in the preceding ones, namely the PIFS-based one.
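    A small sketch of what a weighted matching between normal maps could look like: a weighted mean of per-pixel angular differences between unit normals. The uniform default weight map is an assumption; the paper's criterion derives its weighting from difference maps, which is only hinted at here:

```python
import numpy as np

def normal_map_distance(na, nb, weights=None):
    """na, nb: (H, W, 3) arrays of unit normals. Returns the weighted mean
    per-pixel angular difference in radians (0.0 for identical maps)."""
    cos = np.clip((na * nb).sum(axis=-1), -1.0, 1.0)  # dot of unit normals
    ang = np.arccos(cos)                              # per-pixel angle
    if weights is None:
        weights = np.ones(ang.shape)  # uniform weights (assumption)
    return float((weights * ang).sum() / weights.sum())

h = w = 64
na = np.zeros((h, w, 3)); na[..., 2] = 1.0  # normals pointing at the camera
nb = na.copy()
print(normal_map_distance(na, nb))          # 0.0
```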

    GHItaly19: Research Perspectives on Game Human Interaction

    This is a short introduction to the papers presented at the 3rd Workshop on Games-Human Interaction (GHItaly19), held in connection with CHItaly 2019. This series of workshops focuses on the multifaceted issues related to the design and development of human-game interfaces. This entails multidisciplinary competences and skills, and the final quality of the User eXperience depends on how consistently and smartly they are exploited. As a matter of fact, users' engagement and satisfaction rely on the wise design and skilled evaluation of the produced (multidimensional) artifacts. This gains even more critical importance since the application of video games has long moved beyond the borders of amusement, spurring new possibilities for, e.g., continuous healthcare and education.

    Signal enhancement and efficient DTW-based comparison for wearable gait recognition

    The popularity of biometrics-based user identification has significantly increased over the last few years. User identification based on the face, fingerprints, or iris usually achieves very high accuracy only in controlled setups and can be vulnerable to presentation attacks, spoofing, and forgeries. To overcome these issues, this work proposes a novel strategy based on a relatively less explored biometric trait, i.e., gait, collected by a smartphone accelerometer, which can be more robust to the attacks mentioned above. In the wearable sensor-based gait recognition state of the art, two main classes of approaches exist: 1) those based on machine and deep learning; 2) those exploiting hand-crafted features. While the former can reach higher accuracy, they suffer from problems such as performing poorly outside the training data, i.e., a lack of generalizability. This paper proposes an algorithm based on hand-crafted features for gait recognition that can outperform existing machine and deep learning approaches. It leverages a modified Majority Voting scheme applied to Fast Window Dynamic Time Warping, a modified version of the Dynamic Time Warping (DTW) algorithm with relaxed constraints, to recognize gait patterns. We tested our approach, named MV-FWDTW, on ZJU-gaitacc, one of the most extensive datasets in terms of number of subjects, and especially in terms of number of walks per subject and walk lengths. Results set a new state-of-the-art gait recognition rate of 98.82% in a cross-session experimental setup. We also confirm the quality of the proposed method on a subset of the OU-ISIR dataset, another large state-of-the-art benchmark with more subjects but much shorter walk signals.
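    A compact sketch of windowed DTW combined with majority voting over gait segments. The Sakoe-Chiba-style band below stands in for MV-FWDTW's relaxed window, and the segmentation of the accelerometer stream into cycles is assumed to have already happened:

```python
import numpy as np

def dtw(a, b, band=10):
    """Banded DTW distance between two 1-D signals (Sakoe-Chiba-style)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def identify(probe_cycles, gallery):
    """gallery: dict subject -> list of enrolled cycles; each probe cycle
    votes for its nearest subject, and the majority wins."""
    votes = {}
    for cycle in probe_cycles:
        best = min(gallery, key=lambda s: min(dtw(cycle, g) for g in gallery[s]))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)

rng = np.random.default_rng(1)
gallery = {"alice": [np.sin(np.linspace(0, 6, 100))],
           "bob":   [np.sin(np.linspace(0, 6, 100)) * 0.3]}
probe = [gallery["alice"][0] + rng.normal(0, 0.05, 100) for _ in range(3)]
print(identify(probe, gallery))  # expected: alice
```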