87,698 research outputs found
Keystroke dynamics in the pre-touchscreen era
Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprint, signature, or iris patterns. However, whilst such methods offer a superior security protocol compared with, for example, password-based approaches, their substantial infrastructure costs and intrusive nature make them undesirable, and indeed impractical, for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include the period between key presses and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but clearly important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards, with a view toward indicating how this platform of knowledge can be exploited and extended into newly emergent type-based technological contexts.
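The timing variables the abstract mentions (periods between key presses and releases) are commonly called dwell and flight times. A minimal sketch of extracting them and doing a naive threshold-based comparison; all function names, the tolerance, and the sample data are illustrative, not from the review:

```python
from statistics import mean

def timing_features(events):
    """Extract timings from (key, press_time_ms, release_time_ms) tuples."""
    dwells = [r - p for _, p, r in events]        # hold duration per key
    flights = [events[i + 1][1] - events[i][2]    # release-to-next-press gap
               for i in range(len(events) - 1)]
    return dwells, flights

def signature(events):
    """Collapse a typing sample into a tiny feature vector."""
    dwells, flights = timing_features(events)
    return (mean(dwells), mean(flights))

def matches(enrolled_sig, probe_sig, tol_ms=30.0):
    """Naive verification: accept if every feature is within tol_ms."""
    return all(abs(a - b) <= tol_ms for a, b in zip(enrolled_sig, probe_sig))

# Enrollment sample vs. a later typing sample of the same word.
enrolled = signature([("p", 0, 90), ("a", 130, 210), ("s", 260, 350)])
probe    = signature([("p", 0, 95), ("a", 140, 225), ("s", 275, 360)])
print(matches(enrolled, probe))  # True: the two samples are close in timing
```

Real systems replace the fixed tolerance with the statistical, neural-network, or fuzzy-logic classifiers the abstract lists, trained per user.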
Always on my mind: Cross-brain associations of mental health symptoms during simultaneous parent-child scanning.
How parents manifest symptoms of anxiety or depression may affect how children learn to modulate their own distress, thereby influencing the children's risk for developing an anxiety or mood disorder. Conversely, children's mental health symptoms may impact parents' experiences of negative emotions. Therefore, mental health symptoms can have bidirectional effects in parent-child relationships, particularly during moments of distress or frustration (e.g., when a parent or child makes a costly mistake). The present study used simultaneous functional magnetic resonance imaging (fMRI) of parent-adolescent dyads to examine how brain activity when responding to each other's costly errors (i.e., dyadic error processing) may be associated with symptoms of anxiety and depression. While undergoing simultaneous fMRI scans, healthy dyads completed a task involving feigned errors that indicated their family member made a costly mistake. Inter-brain, random-effects multivariate modeling revealed that parents who exhibited decreased medial prefrontal cortex and posterior cingulate cortex activation when viewing their child's costly error response had children with more symptoms of depression and anxiety. Adolescents with increased anterior insula activation when viewing a costly error made by their parent had more anxious parents. These results reveal cross-brain associations between mental health symptomatology and brain activity during parent-child dyadic error processing.
Modeling sparse connectivity between underlying brain sources for EEG/MEG
We propose a novel technique to assess functional brain connectivity in
EEG/MEG signals. Our method, called Sparsely-Connected Sources Analysis (SCSA),
can overcome the problem of volume conduction by modeling neural data
innovatively with the following ingredients: (a) the EEG is assumed to be a
linear mixture of correlated sources following a multivariate autoregressive
(MVAR) model, (b) the demixing is estimated jointly with the source MVAR
parameters, (c) overfitting is avoided by using the Group Lasso penalty. This
approach allows us to extract the appropriate level of cross-talk between the
extracted sources, and in this manner we obtain a sparse data-driven model of
functional connectivity. We demonstrate the usefulness of SCSA with simulated
data, and compare it to a number of existing algorithms with excellent results.
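The MVAR-plus-Group-Lasso ingredient can be illustrated in isolation. The sketch below is not SCSA itself (it omits the joint estimation of the demixing); it is a minimal proximal-gradient group-lasso fit of sparse MVAR coefficients on already-demixed simulated sources, with every parameter value chosen only for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 3 sources following a sparse VAR(2): one true cross-connection.
n, p, T = 3, 2, 2000
A_true = np.zeros((p, n, n))
A_true[0] = np.diag([0.5, 0.4, 0.3])
A_true[0, 0, 1] = 0.4                        # cross-talk: source 1 -> source 0
X = np.zeros((T, n))
for t in range(p, T):
    X[t] = sum(A_true[k] @ X[t - 1 - k] for k in range(p))
    X[t] += 0.1 * rng.standard_normal(n)

# Lagged design: Y[t] ~ Z[t] @ B, with B[k*n + j, i] = A_k[i, j].
Y = X[p:]
Z = np.hstack([X[p - 1 - k : T - 1 - k] for k in range(p)])

def fit(Y, Z, n, p, lam=5.0, steps=500):
    """Proximal-gradient group lasso: one group = the p lag coefficients
    of each directed connection j -> i, so whole connections are zeroed."""
    B = np.zeros((n * p, n))
    L = np.linalg.norm(Z, 2) ** 2            # Lipschitz bound; step size 1/L
    for _ in range(steps):
        B = B - Z.T @ (Z @ B - Y) / L        # gradient step on squared error
        for j in range(n):
            for i in range(n):
                g = B[j::n, i]               # rows j, j+n, ...: all lags of j->i
                nrm = np.linalg.norm(g)
                if nrm > 0:                  # group soft-thresholding
                    B[j::n, i] = g * max(0.0, 1.0 - lam / (L * nrm))
    return B

B = fit(Y, Z, n, p)
conn = lambda i, j: np.linalg.norm(B[j::n, i])  # strength of connection j -> i
print(conn(0, 1) > 0.05, conn(1, 0) < 0.01)     # true edge kept, absent edge zeroed
```

The group structure is the point: penalizing each connection's lag coefficients jointly prunes entire edges of the connectivity graph rather than individual lags.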
Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
It is desirable to train convolutional networks (CNNs) to run more
efficiently during inference. In many cases, however, the computational budget
that the system has for inference cannot be known beforehand during training,
or the inference budget is dependent on the changing real-time resource
availability. Thus, it is inadequate to train just inference-efficient CNNs,
whose inference costs are not adjustable and cannot adapt to varied inference
budgets. We propose a novel approach for cost-adjustable inference in CNNs -
Stochastic Downsampling Point (SDPoint). During training, SDPoint applies
feature map downsampling to a random point in the layer hierarchy, with a
random downsampling ratio. The different stochastic downsampling configurations
known as SDPoint instances (of the same model) have computational costs
different from each other, while being trained to minimize the same prediction
loss. Sharing network parameters across the different instances provides a
significant regularization boost. During inference, one may handpick an SDPoint
instance that best fits the inference budget. The effectiveness of SDPoint, as
both a cost-adjustable inference approach and a regularizer, is validated
through extensive experiments on image classification.
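A minimal sketch of the stochastic-downsampling idea, with toy elementwise "layers" standing in for conv blocks and the downsampling ratio fixed at 0.5 for simplicity (the paper also randomizes the ratio); everything here is illustrative, not the authors' implementation:

```python
import random
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on a (C, H, W) feature map (H, W even)."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def forward(x, layers, sd_point=None):
    """Run the layer stack; if sd_point == i, downsample just before layer i.
    Each choice of sd_point is one 'instance' of the same shared network."""
    for i, layer in enumerate(layers):
        if sd_point == i:
            x = avg_pool2(x)
        x = layer(x)
    return x

# Toy spatial-size-agnostic "layers" standing in for conv blocks.
layers = [np.tanh, lambda x: np.maximum(x, 0.0), lambda x: 1.5 * x]
x = np.random.default_rng(1).standard_normal((4, 8, 8))

# Training: sample a random downsampling point each step (None = no downsampling),
# so all instances are trained against the same loss with shared parameters.
point = random.choice([None, 0, 1, 2])
_ = forward(x, layers, sd_point=point)

# Inference: handpick the instance that fits the available budget.
cheap = forward(x, layers, sd_point=0)      # downsample earliest: least compute
full  = forward(x, layers, sd_point=None)   # full-cost instance
print(cheap.shape, full.shape)              # (4, 4, 4) (4, 8, 8)
```

Earlier downsampling points shrink every subsequent feature map, which is what makes the per-instance computational cost adjustable.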
FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks
The rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective,
avoiding exponential functions to keep the computation cheap. Because it
adapts through its learnable parameter rather than relying on strict
assumptions, FReLU can be easily used in various network architectures. We
evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performance on both plain and residual networks.
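On our reading of the abstract, making "the rectified point of ReLU a learnable parameter" amounts to a learnable shift b of the output, frelu(x) = relu(x) + b, which can go negative when b does; the paper's exact parameterization may differ. A minimal sketch with a hand-set negative b:

```python
import numpy as np

class FReLU:
    """Flexible ReLU (our reading of the abstract): frelu(x) = relu(x) + b,
    with b a learnable per-channel shift of the rectification point. When
    training drives b below zero, activations can take negative values."""
    def __init__(self, channels):
        self.b = np.zeros(channels)         # learnable; often converges negative

    def forward(self, x):                   # x: (N, channels)
        self.mask = x > 0                   # cache for the backward pass
        return np.maximum(x, 0.0) + self.b

    def backward(self, grad_out):
        self.grad_b = grad_out.sum(axis=0)  # dL/db flows even where x < 0
        return grad_out * self.mask         # dL/dx: standard ReLU gating

act = FReLU(channels=3)
act.b[:] = -0.2                             # a trained, negative rectified point
y = act.forward(np.array([[-1.0, 0.5, 2.0]]))
print(y)                                    # [[-0.2  0.3  1.8]]
```

Note there is no exponential anywhere, matching the stated design goal of cheap computation relative to activations like ELU.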
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
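Of the five challenges listed, class imbalance has a particularly compact standard remedy: inverse-frequency reweighting of the loss. A minimal sketch of computing such weights (illustrative, not taken from the review):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that minority
    classes (e.g., rare disease cases) contribute equally to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * m) for cls, m in counts.items()}

# 90 controls vs. 10 cases: each case gets 9x the per-sample weight.
labels = ["control"] * 90 + ["case"] * 10
w = inverse_frequency_weights(labels)
print(w["case"] / w["control"])  # 9.0
```

The weights are typically passed to the classifier's loss (most ML libraries accept per-class or per-sample weights) so the minority class is not drowned out during training.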