711 research outputs found
A proposal to improve the authentication process in m-health environments
Special Section: Mission Critical Public-Safety Communications: Architectures, Enabling Technologies, and Future Applications
One of the challenges of mobile health is to maintain privacy in access to data, especially when ICT is used to provide access to health services and information. In these scenarios, it is essential to determine and verify the identity of users to ensure the security of the network. One way of authenticating the identity of each patient, doctor, or other stakeholder involved in the process is a software application that analyzes their faces through the cameras integrated in their devices. Selecting an appropriate facial authentication application requires a fair comparison between alternatives on a common database of face images. Users usually authenticate with variations in their appearance while accessing health services. This paper presents both 1) a database of facial images that combines the most common variations among participants and 2) an algorithm that establishes different levels of access to the data based on data sensitivity levels and the accuracy of the authentication.
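The tiered-access idea in the second contribution can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the sensitivity levels, the confidence thresholds, and the read-only fallback are all assumptions made for the example.

```python
# Hypothetical sketch: access depends on both the sensitivity of the
# requested data and the confidence score returned by facial
# authentication. All thresholds here are illustrative.

def access_level(sensitivity: int, auth_confidence: float) -> str:
    """Map data sensitivity (1 = low, 3 = high) and an authentication
    confidence in [0, 1] to an access decision."""
    thresholds = {1: 0.60, 2: 0.75, 3: 0.90}  # stricter for sensitive data
    if sensitivity not in thresholds:
        raise ValueError("sensitivity must be 1, 2 or 3")
    required = thresholds[sensitivity]
    if auth_confidence >= required:
        return "full"
    if auth_confidence >= required - 0.15:
        return "read-only"  # degraded access when confidence is borderline
    return "denied"
```

A borderline authentication thus still allows limited access instead of forcing an all-or-nothing decision, which is one plausible reading of "different levels of access".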
Biometrics: Facial Recognition
Biometrics is a biological measurement and refers to the automatic identification of a
person based on his or her physiological or behavioural characteristics. This project
defines biometrics and its application in the real world, focusing the study on facial
recognition. Facial recognition is the identification of an individual based on facial
data characteristics such as facial features and face position. The objective of this project
is to develop a program in MATLAB that can verify a face by comparing it against a
database of known faces. This project also explains the details of facial
recognition, in particular the four main facial recognition categories and other
components involved in characterizing a face. It also surveys the various approaches to
facial recognition, where different methods are applied, and discusses opinions on
which method is better and what factors influence them. Among the various
approaches, the eigenface technique is explained in detail, including its procedure,
algorithm, and the tools applied. The main part of this project discusses the project
findings. The results and output produced are described following the sequence of the
developed program, in which many face images are displayed. Finally, this project reviews the
relevance of the study contents to the objectives and lists some recommendations
for further work.
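The eigenface technique highlighted above can be sketched compactly. The following is an illustrative NumPy stand-in for the MATLAB program the project describes, using random arrays in place of real face photographs; the image size and number of components are arbitrary assumptions.

```python
import numpy as np

# Minimal eigenface sketch. Each "face" here is a random 8x8 image
# flattened to a 64-vector; a real system would load aligned grayscale
# photographs instead.

rng = np.random.default_rng(0)
train = rng.random((10, 64))              # 10 training faces

mean_face = train.mean(axis=0)
centered = train - mean_face

# Eigenfaces are the principal components of the centered training set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                       # keep the top 5 components
train_weights = centered @ eigenfaces.T   # each face as 5 eigenface weights

def project(face):
    """Represent a face by its weights in eigenface space."""
    return eigenfaces @ (face - mean_face)

def identify(face):
    """Return the index of the closest training face in eigenface space."""
    dists = np.linalg.norm(train_weights - project(face), axis=1)
    return int(np.argmin(dists))
```

Verification then reduces to a nearest-neighbour comparison in the low-dimensional eigenface space rather than on raw pixels, which is the core idea of the technique.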
Movement imitation mechanisms in robots and humans
Imitation mechanisms in artificial and biological agents are of great interest mainly
for two reasons: from the engineering point of view, they allow the agent to efficiently
utilise the knowledge of other agents in its social environment in order to quickly learn
how to perform new tasks; from the scientific point of view, these mechanisms are intriguing since they require the integration of information from the visual, memory, and
motor systems. This thesis presents a dual-route architecture for movement imitation
and considers its plausibility as a computational model of primate movement imitation
mechanisms. The developed architecture consists of two routes, termed passive and active. The
active route tightly couples behaviour perception and generation: in order to perceive
a demonstrated behaviour, the motor behaviours already in the imitator's repertoire
are utilised. While the demonstration is unfolding, these behaviours are executed on
internal forward models, and predictions are generated with respect to what the next
state of the demonstrator will be. Behaviours are reinforced based on the accuracy of
these predictions. Imitation amounts to selecting the behaviour that performed best,
and re-enacting that behaviour. If none of the existing behaviours performs adequately,
control is passed to the passive route, which extracts the representative postures that
describe the demonstrated behaviour, and imitates it by sequentially going through the
extracted postures. Demonstrated behaviours imitated through the passive route form
the basis for acquiring new behaviours, which are added to the repertoire available
to the active route. A stereo vision robotic head, and a dynamically simulated 13
DoF articulated robot are utilised in order to implement this architecture, illustrate
its behavioural characteristics, and investigate its capabilities and limitations. The
experiments show the architecture being capable of imitating and learning a variety
of head and arm movements, while they highlight its inability to perceive a behaviour
that is in the imitator's repertoire, if the behaviour is demonstrated with execution
parameters (for example, speed) unattainable by the imitator. This thesis also proposes this architecture as a computational model of primate movement imitation mechanisms. The behavioural characteristics of the architecture are
compared with biological data available on monkey and human imitation mechanisms.
The behaviour of the active route correlates favourably with brain activation data,
both at the neuronal level (monkey's F5 'mirror neurons'), and at the systems level
(human PET and MEP data that demonstrate activation of motor areas during action observation and imagination). The limitations of the architecture that surfaced
during the computational experiments lead to testable predictions regarding the behaviour of mirror neurons. The passive route is a computational implementation of an
intermodal-matching mechanism, that has been hypothesised to underlie early infant
movement imitation (the AIM hypothesis). Destroying the passive route leads to the
architecture being unable to imitate any novel behaviours, but retaining its ability to
imitate known ones. This characteristic correlates favourably with the symptoms displayed by humans suffering from visuo-imitative apraxia. Finally, dealing with novel
vs. known behaviours through separate routes correlates favourably with human brain
activation (PET) data which show that the pattern of activation differs according to
whether the observed action is meaningful or not to the observer.
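The active route's predict-and-score loop described in this abstract can be illustrated with a toy sketch. This is not the thesis code: states are one-dimensional positions, behaviours are constant-velocity motions, and the error threshold is an assumption; the thesis uses articulated joint configurations and learned forward models.

```python
# Toy illustration of the active route: each known behaviour is run on a
# forward model, its prediction of the demonstrator's next state is
# compared with the observation, and the best-predicting behaviour is
# selected for re-enactment. If none predicts well enough, control would
# pass to the passive route (returned here as None).

def simulate(behaviour, state):
    """Forward model stand-in: predict the next state under a
    constant-velocity behaviour."""
    return state + behaviour["velocity"]

def select_behaviour(repertoire, demonstration, threshold=0.1):
    """Score each behaviour by mean prediction error over the
    demonstration; return its name, or None for the passive route."""
    errors = {}
    for name, behaviour in repertoire.items():
        err = 0.0
        for prev, nxt in zip(demonstration, demonstration[1:]):
            err += abs(simulate(behaviour, prev) - nxt)
        errors[name] = err / (len(demonstration) - 1)
    best = min(errors, key=errors.get)
    return best if errors[best] <= threshold else None

repertoire = {"reach": {"velocity": 1.0}, "withdraw": {"velocity": -1.0}}
demo = [0.0, 1.0, 2.0, 3.0]    # matches "reach"
fast_demo = [0.0, 3.0, 6.0]    # same motion at an unattainable speed
```

Note how `fast_demo` reproduces the limitation the experiments surfaced: a known behaviour demonstrated with unattainable execution parameters (here, speed) is not recognized and falls through to the passive route.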
Face Image and Video Analysis in Biometrics and Health Applications
Computer Vision (CV) enables computers and systems to derive meaningful information from acquired visual inputs, such as images and videos, and make decisions based on the extracted information. Its goal is to acquire, process, analyze, and understand the information by developing a theoretical and algorithmic model. Biometrics are distinctive and measurable human characteristics used to label or describe individuals by combining computer vision with knowledge of human physiology (e.g., face, iris, fingerprint) and behavior (e.g., gait, gaze, voice). The face is one of the most informative biometric traits. Many studies have investigated the human face from the perspectives of various disciplines, ranging from computer vision and deep learning to neuroscience and biometrics. In this work, we analyze face characteristics from digital images and videos in the areas of morphing attack and defense, and autism diagnosis. For face morphing attack generation, we proposed a transformer-based generative adversarial network to generate more visually realistic morphing attacks by combining different losses, such as face matching distance, facial landmark-based loss, perceptual loss, and pixel-wise mean square error. In the face morphing attack detection study, we designed a fusion-based few-shot learning (FSL) method to learn discriminative features from face images for few-shot morphing attack detection (FS-MAD), and extended the current binary detection into multiclass classification, namely, few-shot morphing attack fingerprinting (FS-MAF). In the autism diagnosis study, we developed a discriminative few-shot learning method to analyze hour-long video data and explored the fusion of facial dynamics for facial trait classification of autism spectrum disorder (ASD) at three severity levels. The results show outstanding performance of the proposed fusion-based few-shot framework on the dataset.
Besides, we further explored the possibility of performing face micro-expression spotting and feature analysis on autism video data to classify ASD and control groups. The results indicate the effectiveness of subtle facial expression changes for autism diagnosis.
Generating One Biometric Feature from Another: Faces from Fingerprints
This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships between fingerprints and faces and to model and demonstrate the existence of these relationships. The proposed system is the first study that generates all parts of the face, including eyebrows, eyes, nose, mouth, ears, and face border, from only fingerprints. It is also unique and different from similar studies recently presented in the literature, with some superior features. The parameter settings of the system were obtained with the help of the Taguchi experimental design technique. The performance and accuracy of the system were evaluated with the 10-fold cross-validation technique using qualitative evaluation metrics in addition to expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics, illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performance. Experimental results have shown that one biometric feature can be determined from another. These results once more indicate that there is a strong relationship between fingerprints and faces.
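The 10-fold cross-validation protocol used for evaluation can be sketched generically. The trivial mean predictor below stands in for the study's neural network; the contiguous fold construction is the standard textbook scheme, not necessarily the study's exact split.

```python
# Generic k-fold cross-validation sketch: the data are split into k folds,
# each fold serving once as the test set while the remaining folds train
# the model.

def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_validate(data, k=10):
    """Average absolute error of a mean predictor across k folds."""
    errors = []
    for train, test in k_fold_indices(len(data), k):
        prediction = sum(data[i] for i in train) / len(train)
        errors.extend(abs(data[i] - prediction) for i in test)
    return sum(errors) / len(errors)
```

Every sample is tested exactly once on a model that never saw it during training, which is what makes the reported accuracy an out-of-sample estimate.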
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task.
Authorization and authentication strategy for mobile highly constrained edge devices
The rising popularity of mobile devices has driven the need for faster connection speeds and more flexible authentication and authorization methods. This project aims to develop and implement an innovative system that provides authentication and authorization for both the device and the user. It also facilitates real-time user re-authentication within the application, ensuring transparency throughout the process. Additionally, the system aims to establish a secure architecture that minimizes the computational requirements on the client's device, thus optimizing the device's battery life. The results demonstrate satisfactory outcomes, validating the effectiveness of the proposed solution, although there is still potential to improve its overall performance.
Non-Intrusive Affective Assessment in the Circumplex Model from Pupil Diameter and Facial Expression Monitoring
Automatic methods for affective assessment seek to enable computer systems to recognize the affective state of their users. This dissertation proposes a system that uses non-intrusive measurements of the user’s pupil diameter and facial expression to characterize his/her affective state in the Circumplex Model of Affect. This affective characterization is achieved by estimating the affective arousal and valence of the user’s affective state.
In the proposed system the pupil diameter signal is obtained from a desktop eye gaze tracker, while the facial expression components, called Facial Animation Parameters (FAPs), are obtained from a Microsoft Kinect module, which also captures the face surface as a cloud of points. Both types of data are recorded 10 times per second. This dissertation implemented pre-processing methods and feature extraction approaches that yield a reduced number of features representative of discrete 10-second recordings, to estimate the level of affective arousal and the type of affective valence experienced by the user in those intervals.
The dissertation uses a machine learning approach, specifically Support Vector Machines (SVMs), to act as a model that will yield estimations of valence and arousal from the features derived from the data recorded.
Pupil diameter and facial expression recordings were collected from 50 subjects who volunteered to participate in an FIU IRB-approved experiment to capture their reactions to the presentation of 70 pictures from the International Affective Picture System (IAPS) database, which have been used in large calibration studies and therefore have associated arousal and valence mean values. Additionally, each of the 50 volunteers in the data collection experiment provided their own subjective assessment of the levels of arousal and valence elicited in him/her by each picture. This process resulted in a set of face and pupil data records, along with the expected reaction levels of arousal and valence, i.e., the “labels”, for the data used to train and test the SVM classifiers.
The trained SVM classifiers achieved 75% accuracy for valence estimation and 92% accuracy for arousal estimation, confirming the initial viability of non-intrusive affective assessment systems based on pupil diameter and facial expression monitoring.
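The windowing step described in this abstract (signals sampled 10 times per second, reduced to a few features per discrete 10-second recording) can be sketched as follows. The specific summary features (mean, standard deviation, range) are illustrative stand-ins for the dissertation's actual feature set.

```python
import numpy as np

# Sketch of the pre-processing described: cut each signal into 10-second
# windows of 100 samples (10 Hz) and reduce each window to a short
# feature row that an SVM could then classify.

SAMPLE_RATE = 10                         # samples per second
WINDOW_SECONDS = 10
WINDOW = SAMPLE_RATE * WINDOW_SECONDS    # 100 samples per window

def window_features(signal):
    """Return one (mean, std, range) feature row per full 10-s window."""
    n_windows = len(signal) // WINDOW    # drop any trailing partial window
    rows = []
    for i in range(n_windows):
        w = np.asarray(signal[i * WINDOW:(i + 1) * WINDOW], dtype=float)
        rows.append((w.mean(), w.std(), w.max() - w.min()))
    return rows
```

Each row would then be paired with the arousal/valence label for that interval and fed to the SVM classifiers as a training or test example.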
- …