EEG-based cognitive control behaviour assessment: an ecological study with professional air traffic controllers
Several models defining different types of cognitive human behaviour are available. For this work, we have selected the Skill, Rule and Knowledge (SRK) model proposed by Rasmussen in 1983. This model is currently broadly used in safety-critical domains, such as aviation. However, there are currently no tools able to assess at which level of cognitive control an operator is dealing with a given task, that is, whether he/she is performing the task as an automated routine (skill level), as a procedure-based activity (rule level), or as a problem-solving process (knowledge level). Several studies have tried to model SRK behaviours from a Human Factors perspective. Despite such studies, there is no evidence that these behaviours have been evaluated from a neurophysiological point of view, for example by considering brain-activity variations across the different SRK levels. Therefore, the proposed study aimed to investigate the use of neurophysiological signals to assess cognitive control behaviours according to the SRK taxonomy. The results of the study, performed on 37 professional Air Traffic Controllers, demonstrated that specific brain features can characterize and discriminate the different SRK levels, therefore enabling an objective assessment of the degree of cognitive control behaviour in realistic settings.
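The abstract does not specify which brain features were used; as a purely hypothetical illustration of how EEG band-power features could feed a three-class (skill/rule/knowledge) discriminator, here is a minimal Python sketch. The sampling rate, frequency bands, and choice of an LDA classifier are all assumptions, not the study's method.

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256  # assumed EEG sampling rate (Hz)

def band_power_features(epochs):
    """Mean spectral power per channel in the theta, alpha and beta bands.
    epochs: array of shape (n_trials, n_channels, n_samples)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    bands = ((4, 8), (8, 12), (13, 30))  # assumed theta/alpha/beta ranges (Hz)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)  # (n_trials, 3 * n_channels)

# X: EEG epochs, y: SRK labels (0 = skill, 1 = rule, 2 = knowledge) -- hypothetical data
# scores = cross_val_score(LinearDiscriminantAnalysis(), band_power_features(X), y, cv=5)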
Sparse Randomized Kaczmarz for Support Recovery of Jointly Sparse Corrupted Multiple Measurement Vectors
While single measurement vector (SMV) models have been widely studied in
signal processing, there is a surging interest in addressing the multiple
measurement vectors (MMV) problem. In the MMV setting, more than one
measurement vector is available and the multiple signals to be recovered share
some commonalities such as a common support. Applications in which MMV is a
naturally occurring phenomenon include online streaming, medical imaging, and
video recovery. This work presents a stochastic iterative algorithm for the
support recovery of jointly sparse corrupted MMV. We present a variant of the
Sparse Randomized Kaczmarz algorithm for corrupted MMV and compare our proposed
method with an existing Kaczmarz type algorithm for MMV problems. We also
showcase the usefulness of our approach in the online (streaming) setting and
provide empirical evidence suggesting that the proposed method is robust to both the distribution of the corruption and the number of corruptions.
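For context, the core randomized Kaczmarz update that these variants build on projects the current iterate onto the hyperplane of one randomly sampled equation. A minimal sketch of that baseline (plain randomized Kaczmarz with norm-proportional row sampling, not the paper's corrupted-MMV variant):

import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve Ax = b iteratively: sample row i with probability
    proportional to ||a_i||^2, then project x onto that hyperplane."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    sq_norms = np.einsum("ij,ij->i", A, A)   # ||a_i||^2 for each row
    probs = sq_norms / sq_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / sq_norms[i] * A[i]
    return x

The sparse variants discussed above modify this update, e.g. by down-weighting coordinates believed to be off the support.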
Extension of Sparse Randomized Kaczmarz Algorithm for Multiple Measurement Vectors
The Kaczmarz algorithm is a popular method for iteratively solving an overdetermined system of linear equations. The traditional Kaczmarz algorithm can approximate the solution in a few sweeps through the equations, but a randomized version of the algorithm was shown to converge exponentially in expectation, at a rate independent of the number of equations. Recently, an algorithm for finding a sparse solution to a linear system of equations was proposed based on a weighted randomized Kaczmarz algorithm. These algorithms solve the single measurement vector problem; however, there are applications where multiple measurements are available. In this work, the objective is to solve a multiple measurement vector problem with common sparse support by modifying the randomized Kaczmarz algorithm. We have also modelled the problem of face recognition from video as a multiple measurement vector problem and solved it using our proposed technique. We have compared the proposed algorithm with the state-of-the-art spectral projected gradient algorithm for multiple measurement vectors on both real and synthetic datasets. Monte Carlo simulations confirm that our proposed algorithm has better recovery and convergence rates than the MMV version of the spectral projected gradient algorithm under fairness constraints.
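The paper's weighted scheme is not reproduced here; as a hypothetical illustration of how per-column Kaczmarz updates can be combined with a joint row-sparsity step to enforce a common support, consider the following toy sketch (the periodic hard threshold and its schedule are assumptions of this sketch, not the paper's algorithm):

import numpy as np

def mmv_kaczmarz(A, B, sparsity, iters=200, seed=0):
    """Recover X with AX = B, columns of X sharing a sparse support:
    per-column randomized Kaczmarz sweeps plus a periodic joint hard
    threshold keeping the rows of X with largest l2 norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = np.zeros((n, B.shape[1]))
    sq_norms = np.linalg.norm(A, axis=1) ** 2
    probs = sq_norms / sq_norms.sum()
    for t in range(iters):
        for j in range(B.shape[1]):           # one Kaczmarz update per column
            i = rng.choice(m, p=probs)
            X[:, j] += (B[i, j] - A[i] @ X[:, j]) / sq_norms[i] * A[i]
        if t % 10 == 9:                        # enforce joint row sparsity
            row_energy = np.linalg.norm(X, axis=1)
            keep = np.argsort(row_energy)[-sparsity:]
            mask = np.zeros(n, dtype=bool)
            mask[keep] = True
            X[~mask] = 0.0
    return X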
Segregated Runge–Kutta time integration of convection-stabilized mixed finite element schemes for wall-unresolved LES of incompressible flows
In this work, we develop a high-performance numerical framework for the large eddy simulation (LES) of incompressible flows. The spatial discretization of the nonlinear system is carried out using mixed finite element (FE) schemes supplemented with symmetric projection stabilization of the convective term and a penalty term for the divergence constraint. These additional terms, introduced at the discrete level, have been proved to act as implicit LES models. In order to perform meaningful wall-unresolved simulations, we consider a weak imposition of the boundary conditions using a Nitsche-type scheme, where the tangential-component penalty term is designed to act as a wall law. Next, segregated Runge–Kutta (SRK) schemes (recently proposed by the authors for laminar flow problems) are applied to the LES of turbulent flows. By the introduction of a penalty term on the trace of the acceleration, these methods exhibit excellent stability properties for both implicit and explicit treatment of the convective terms. SRK schemes are well suited to large-scale simulations, since they reduce the computational cost of the linear system solves by splitting velocity and pressure computations at the time-integration level, leading to two uncoupled systems. The pressure system is a Darcy-type problem that can easily be preconditioned using a traditional block-preconditioning scheme that only requires a Poisson solver. In the end, only coercive systems have to be solved, and these can be effectively preconditioned by multilevel domain decomposition schemes that are both optimal and scalable. The framework is applied to the Taylor–Green and turbulent channel flow benchmarks in order to demonstrate the accuracy of the convection-stabilized mixed FEs as LES models and of the SRK time integrators. The scalability of the preconditioning techniques (in space only) has also been demonstrated for one step of the SRK scheme for the Taylor–Green flow using uniform meshes. Moreover, a turbulent flow around a NACA profile is solved to show the applicability of the proposed algorithms to a realistic problem.
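The segregated velocity/pressure structure described above is, in spirit, analogous to classical projection methods. As a much-simplified illustration (a Chorin-style projection step on a periodic finite-difference grid, emphatically not the authors' stabilized FE/SRK scheme), the following sketch shows the three stages: tentative velocity solve, pressure Poisson problem, and velocity correction.

import numpy as np

def ddx(f, dx):  # centered x-derivative, periodic
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

def ddy(f, dx):  # centered y-derivative, periodic
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def laplacian(f, dx):
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

def poisson_fft(rhs, dx):
    """Solve lap(p) = rhs on a periodic grid with a spectral solver."""
    n = rhs.shape[0]
    k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    kx, ky = np.meshgrid(k, k)
    denom = -(kx**2 + ky**2)
    denom[0, 0] = 1.0                 # avoid dividing the mean mode by zero
    p_hat = np.fft.fft2(rhs) / denom
    p_hat[0, 0] = 0.0                 # pin the pressure mean to zero
    return np.real(np.fft.ifft2(p_hat))

def projection_step(u, v, dt, nu, dx):
    # 1. Tentative velocity: explicit advection-diffusion (forward Euler).
    u_star = u + dt * (nu * laplacian(u, dx) - u * ddx(u, dx) - v * ddy(u, dx))
    v_star = v + dt * (nu * laplacian(v, dx) - u * ddx(v, dx) - v * ddy(v, dx))
    # 2. Pressure Poisson problem from the divergence of the tentative field.
    p = poisson_fft((ddx(u_star, dx) + ddy(v_star, dx)) / dt, dx)
    # 3. Correct: subtract the pressure gradient to approach a divergence-free field.
    return u_star - dt * ddx(p, dx), v_star - dt * ddy(p, dx)

The point of the segregation, as in the SRK schemes above, is that the velocity and pressure systems are solved one after the other rather than as one coupled saddle-point problem.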
Inference of Markovian Properties of Molecular Sequences from NGS Data and Applications to Comparative Genomics
Next Generation Sequencing (NGS) technologies generate large amounts of short
read data for many different organisms. The fact that NGS reads are generally
short makes it challenging to assemble the reads and reconstruct the original
genome sequence. For clustering genomes using such NGS data, word-count-based alignment-free sequence comparison is a promising approach, but it requires knowledge of the underlying expected word counts.
A plausible model for this underlying distribution of word counts is given
through modelling the DNA sequence as a Markov chain (MC). For single long
sequences, efficient statistics are available to estimate the order of MCs and
the transition probability matrix for the sequences. As NGS data do not provide
a single long sequence, inference methods on Markovian properties of sequences
based on single long sequences cannot be directly used for NGS short read data.
Here we derive a normal approximation for such word counts. We also show that
the traditional Chi-square statistic has an approximate gamma distribution,
using the Lander-Waterman model for physical mapping. We propose several
methods to estimate the order of the MC based on NGS reads and evaluate them
using simulations. We illustrate the applications of our results by clustering
genomic sequences of several vertebrate and tree species based on NGS reads
using alignment-free sequence dissimilarity measures. We find that the estimated order of the MC has a considerable effect on the clustering results, and that clustering based on an MC of the estimated order gives a plausible clustering of the species.
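As a minimal illustration of word-count-based MC estimation from pooled short reads (not the paper's normal or gamma approximations), the transition probabilities of an order-k chain can be estimated from (k+1)-mer counts:

from collections import Counter
from itertools import product

def transition_probs(reads, order=2, alphabet="ACGT"):
    """Estimate order-k Markov-chain transition probabilities from
    (k+1)-mer counts pooled across short reads."""
    counts = Counter()
    k = order
    for read in reads:
        for i in range(len(read) - k):
            counts[read[i:i + k + 1]] += 1
    probs = {}
    for ctx in map("".join, product(alphabet, repeat=k)):
        total = sum(counts[ctx + a] for a in alphabet)
        if total:
            probs[ctx] = {a: counts[ctx + a] / total for a in alphabet}
    return probs

# probs = transition_probs(["ACGTACGT", "CGTACGTA"], order=1)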
Measuring Similarity in Large-Scale Folksonomies
Social (or folksonomic) tagging has become a very popular way to describe content within Web 2.0 websites. Unlike taxonomies, which impose a hierarchical categorisation on content, folksonomies enable end-users to freely create and choose the categories (in this case, tags) that best describe some content. However, as tags are informally defined, continually changing, and ungoverned, social tagging has often been criticised for lowering, rather than increasing, the efficiency of searching, due to the number of synonyms, homonyms, and polysemous terms, as well as the heterogeneity of users and the noise they introduce. To address this issue, a variety of approaches have been proposed that recommend to users which tags to use, both when labelling and when looking for resources. As we illustrate in this paper, real-world folksonomies are characterised by power-law distributions of tags, over which commonly used similarity metrics, including the Jaccard coefficient and cosine similarity, fail to produce meaningful results. We thus propose a novel metric, specifically developed to capture similarity in large-scale folksonomies, that is based on a mutual reinforcement principle: two tags are deemed similar if they have been associated with similar resources and, vice versa, two resources are deemed similar if they have been labelled with similar tags. We offer an efficient realisation of this similarity metric and assess its quality experimentally by comparing it against cosine similarity on three large-scale datasets, namely Bibsonomy, MovieLens and CiteULike.
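The exact metric is defined in the paper; as a SimRank-flavoured sketch of the mutual reinforcement principle on a tag-resource incidence matrix (the normalisation scheme and damping factor below are assumptions of this sketch, not the paper's realisation), one could iterate:

import numpy as np

def mutual_reinforcement_similarity(M, iters=10, c=0.8):
    """Alternately refine tag-tag and resource-resource similarity on a
    bipartite incidence matrix M of shape (n_tags, n_resources):
    tags are similar if attached to similar resources, and vice versa."""
    n_tags, n_res = M.shape
    S_t = np.eye(n_tags)                                  # tag similarity
    S_r = np.eye(n_res)                                   # resource similarity
    Wt = M / np.maximum(M.sum(axis=1, keepdims=True), 1)  # row-normalized
    Wr = M / np.maximum(M.sum(axis=0, keepdims=True), 1)  # column-normalized
    for _ in range(iters):
        S_t = c * Wt @ S_r @ Wt.T
        np.fill_diagonal(S_t, 1.0)   # every tag is maximally similar to itself
        S_r = c * Wr.T @ S_t @ Wr
        np.fill_diagonal(S_r, 1.0)
    return S_t, S_r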
