An evaluation of a three-modal hand-based database for forensic-based gender recognition
In recent years, behavioural soft biometrics have been widely used to improve the performance of biometric systems. Information such as gender, age and ethnicity can be obtained from more than one behavioural modality. In this paper, we propose a multimodal hand-based behavioural database for gender recognition, and our goal is to evaluate the performance of this multimodal database. To this end, an experiment was conducted with 76 users, from whom keyboard dynamics, touchscreen dynamics and handwritten signature data were collected. Our approach compares one-modal and two-modal subsets of the biometric data against the full multimodal database. Traditional and new classifiers were used, and the Kruskal-Wallis statistical test was applied to analyse the accuracy obtained on each database. The results showed that the multimodal database outperforms the other databases
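The Kruskal-Wallis test used above compares accuracy samples across the database configurations without assuming normality. A minimal pure-Python sketch of the statistic (the per-fold accuracy values below are hypothetical illustrations, not the paper's results):

```python
def kruskal_wallis(*groups):
    """Kruskal-Wallis H statistic over k independent samples.

    Ranks all observations jointly (average ranks for ties; the
    tie-correction factor is omitted for brevity) and measures how far
    each group's rank sum departs from its expectation under the null.
    """
    data = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    n = len(data)
    rank_sum = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # 1-based ranks i+1 .. j, averaged for ties
        for k in range(i, j):
            rank_sum[data[k][1]] += avg_rank
        i = j
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(grp) for rs, grp in zip(rank_sum, groups)
    ) - 3 * (n + 1)


# Hypothetical per-fold accuracies for three database configurations.
one_modal  = [0.78, 0.81, 0.77, 0.80, 0.79]
two_modal  = [0.84, 0.86, 0.83, 0.85, 0.87]
multimodal = [0.90, 0.92, 0.89, 0.91, 0.93]

h = kruskal_wallis(one_modal, two_modal, multimodal)
# With k = 3 groups, H is referred to a chi-squared distribution with
# k - 1 = 2 degrees of freedom (critical value ~5.99 at alpha = 0.05).
print(f"H = {h:.2f}")
```

A large H here would indicate that at least one configuration's accuracy distribution differs significantly from the others, which is the comparison the abstract describes.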
Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling
In everyday life people use their mobile phones on-the-go with different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern and input technique on commonly used performance parameters like error rate, accuracy and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated overall better performance compared to thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Also, models identified using specific input techniques did not perform well when tested in other conditions, demonstrating the limited validity of offset models beyond a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique, at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and by between 5.3% and 11.9% for portrait-based methods
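An offset model of the kind described above predicts the systematic displacement between where a user aims and where the tap lands, so raw input can be corrected. The paper's model is machine-learned from condition-specific data; as a minimal illustration of the idea only, here is a sketch using simple per-axis linear regression on synthetic calibration data (all values hypothetical):

```python
def fit_offset(intended, observed):
    """Fit offset = a + b * coordinate by ordinary least squares (one axis).

    `intended` holds target coordinates, `observed` the corresponding tap
    coordinates; the systematic offset is modelled as linear in position.
    """
    n = len(intended)
    off = [o - t for t, o in zip(intended, observed)]
    mx = sum(intended) / n
    my = sum(off) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(intended, off))
         / sum((x - mx) ** 2 for x in intended))
    return my - b * mx, b  # intercept a, slope b


def correct(model, tap):
    """Recover the intended coordinate from a raw tap.

    observed = intended + a + b * intended, so
    intended = (observed - a) / (1 + b).
    """
    a, b = model
    return (tap - a) / (1 + b)


# Synthetic calibration data: taps land 2 px off plus 5% of the position.
targets = [10.0, 50.0, 100.0, 150.0, 200.0]
taps = [t + 2.0 + 0.05 * t for t in targets]

model = fit_offset(targets, taps)  # recovers a = 2.0, b = 0.05
```

The abstract's finding that static-data models transfer poorly to walking conditions corresponds here to the fitted `(a, b)` being specific to the condition under which the calibration data were recorded.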
Vision systems with the human in the loop
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically-based usability experiments is stressed
A novel word-independent gesture-typing continuous authentication scheme for mobile devices
In this study, we produce a new continuous authentication scheme for gesture-typing on mobile devices. Our scheme is the first to authenticate gesture-typing interactions in a word-independent format. The scheme relies on groupings of features extracted from the word gesture after it has been reduced to parts common to all gestures. We show that movement sensors are also important in differentiating between users. We describe the feature extraction processes and analyse our proposed feature set. The unique process of our authentication scheme is presented and described. We collect our own gesture-typing dataset, including data recorded during sitting, standing and walking activities for realism. We test our features against state-of-the-art touchscreen interaction features and compare feature extraction times on real mobile devices. Our scheme authenticates users with an equal error rate of 3.58% for a single word-gesture. The equal error rate is reduced to 0.81% when 3 word-gestures are used to authenticate
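The equal error rate reported above is the operating point where the false accept rate (impostors wrongly accepted) equals the false reject rate (genuine users wrongly rejected). A minimal sketch of how it can be approximated from score lists, using made-up scores where higher means more likely genuine:

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping every observed score as a threshold.

    FAR: fraction of impostor scores at/above the threshold (accepted).
    FRR: fraction of genuine scores below the threshold (rejected).
    Returns the mean of FAR and FRR at the threshold where they are closest.
    """
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer


# Made-up authentication scores for illustration only.
genuine = [0.91, 0.85, 0.78, 0.88, 0.60]
impostor = [0.30, 0.42, 0.25, 0.65, 0.38]

eer = equal_error_rate(genuine, impostor)
```

Combining several word-gestures, as the abstract does, effectively averages the evidence before thresholding, which separates the genuine and impostor score distributions further and drives the EER down.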
- …