Inference and Evaluation of the Multinomial Mixture Model for Text Clustering
In this article, we investigate the use of a probabilistic model for
unsupervised clustering in text collections. Unsupervised clustering has become
a basic module for many intelligent text processing applications, such as
information retrieval, text classification or information extraction. The model
considered in this contribution consists of a mixture of multinomial
distributions over the word counts, each component corresponding to a different
theme. We present and contrast various estimation procedures, which apply both
in supervised and unsupervised contexts. In supervised learning, this work
suggests a criterion for evaluating the posterior odds of new documents which
is more statistically sound than the "naive Bayes" approach. In an unsupervised
context, we propose measures to set up a systematic evaluation framework and
start with examining the Expectation-Maximization (EM) algorithm as the basic
tool for inference. We discuss the importance of initialization and the
influence of other features such as the smoothing strategy or the size of the
vocabulary, thereby illustrating the difficulties incurred by the high
dimensionality of the parameter space. We also propose a heuristic algorithm
based on iterative EM with vocabulary reduction to solve this problem. Using
the fact that the latent variables can be analytically integrated out, we finally show that the Gibbs sampling algorithm is tractable and compares favorably to the basic expectation-maximization approach.
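The estimation procedure at the core of this abstract can be sketched in a few lines. Below is a minimal EM implementation for a mixture of multinomials with additive smoothing; it is not the authors' code, and the toy corpus, smoothing constant `alpha`, and random soft-assignment initialization are illustrative assumptions.

```python
import numpy as np

def em_multinomial_mixture(X, K, n_iter=100, alpha=0.01, seed=0):
    """EM for a mixture of K multinomials over word-count rows X (docs x vocab).

    `alpha` is an additive smoothing constant on the topic-word estimates,
    echoing the smoothing strategies the abstract discusses. Initialization
    is a random soft assignment of documents to components.
    """
    rng = np.random.default_rng(seed)
    D, V = X.shape
    r = rng.dirichlet(np.ones(K), size=D)          # responsibilities (D, K)
    for _ in range(n_iter):
        # M-step: mixing weights and smoothed topic-word distributions
        pi = r.mean(axis=0)
        counts = r.T @ X + alpha
        theta = counts / counts.sum(axis=1, keepdims=True)
        # E-step: posterior responsibilities, computed in the log domain
        log_r = np.log(pi)[None, :] + X @ np.log(theta).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
    return pi, theta, r

# Toy corpus with two obvious themes: words {0,1} vs. words {2,3}
X = np.array([[10, 0, 0, 0], [9, 1, 0, 0], [0, 0, 10, 0], [0, 0, 9, 1]])
pi, theta, r = em_multinomial_mixture(X, K=2)
labels = r.argmax(axis=1)
```

On such well-separated data the responsibilities sharpen quickly; in realistic high-dimensional vocabularies, the abstract's points about initialization and vocabulary reduction become essential.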
Exploiting Nonlinear Recurrence and Fractal Scaling Properties for Voice Disorder Detection
Background: Voice disorders affect patients profoundly, and acoustic tools can potentially measure voice function objectively. Disordered sustained vowels exhibit wide-ranging phenomena, from nearly periodic to highly complex, aperiodic vibrations, and increased "breathiness". Modelling and surrogate data studies have shown significant nonlinear and non-Gaussian random properties in these sounds. Nonetheless, existing tools are limited to analysing voices displaying near periodicity, and do not account for this inherent biophysical nonlinearity and non-Gaussian randomness, often using linear signal processing methods insensitive to these properties. They do not directly measure the two main biophysical symptoms of disorder: complex nonlinear aperiodicity, and turbulent, aeroacoustic, non-Gaussian randomness. Often these tools cannot be applied to more severe disordered voices, limiting their clinical usefulness.

Methods: This paper introduces two new tools to speech analysis: recurrence and fractal scaling, which overcome the range limitations of existing tools by directly addressing these two symptoms of disorder, together reproducing a "hoarseness" diagram. A simple bootstrapped classifier then uses these two features to distinguish normal from disordered voices.

Results: On a large database of subjects with a wide variety of voice disorders, these new techniques distinguish normal from disordered cases using quadratic discriminant analysis, achieving an overall correct classification performance of 91.8% ± 2.0%. The true positive classification performance is 95.4% ± 3.2%, and the true negative performance is 91.5% ± 2.3% (95% confidence). This is shown to outperform all combinations of the most popular classical tools.

Conclusions: Given the very large number of arbitrary parameters and computational complexity of existing techniques, these new techniques are far simpler and yet achieve clinically useful classification performance using only a basic classification technique. They do so by exploiting the inherent nonlinearity and turbulent randomness in disordered voice signals. They are widely applicable to the whole range of disordered voice phenomena by design. These new measures could therefore be used for a variety of practical clinical purposes.
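The classification stage can be illustrated with a small, self-contained quadratic discriminant analysis. This is a generic QDA sketch on synthetic two-feature data standing in for the recurrence and fractal-scaling measures; it is not the paper's bootstrapped classifier, and the cluster means and spreads are invented for illustration.

```python
import numpy as np

def qda_fit(X, y):
    """Fit one Gaussian (log prior, mean, covariance) per class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[int(c)] = (np.log(len(Xc) / len(X)),
                          Xc.mean(axis=0),
                          np.cov(Xc, rowvar=False))
    return params

def qda_predict(X, params):
    """Assign each row to the class with the highest Gaussian log score."""
    classes = sorted(params)
    scores = []
    for c in classes:
        log_prior, mu, cov = params[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)   # squared Mahalanobis
        scores.append(log_prior - 0.5 * (logdet + maha))
    return np.array(classes)[np.argmax(np.column_stack(scores), axis=1)]

# Synthetic stand-ins for the two features (recurrence, fractal scaling)
rng = np.random.default_rng(1)
X_norm = rng.normal([0.2, 0.3], 0.05, size=(50, 2))   # "normal" voices
X_dis = rng.normal([0.6, 0.7], 0.10, size=(50, 2))    # "disordered" voices
X = np.vstack([X_norm, X_dis])
y = np.repeat([0, 1], 50)
accuracy = (qda_predict(X, qda_fit(X, y)) == y).mean()
```

The quadratic (per-class covariance) boundary is what lets QDA separate a tight "normal" cluster from a more diffuse "disordered" one, which is consistent with the abstract's choice of classifier.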

Making AI Meaningful Again
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to materialize. Today, we are once again experiencing a period of enthusiasm, fired above all by the successes of the technology of deep neural networks, or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then present an alternative approach to language-centric AI, in which we identify a role for philosophy.
Accurate and reliable segmentation of the optic disc in digital fundus images
We describe a complete pipeline for the detection and accurate automatic segmentation of the optic disc in digital fundus images. This procedure provides separation of vascular information and accurate inpainting of vessel-removed images, symmetry-based optic disc localization, and fitting of incrementally complex contour models at increasing resolutions using information related to inpainted images and vessel masks. Validation experiments, performed on a large dataset of images of healthy and pathological eyes, annotated by experts and partially graded with a quality label, demonstrate the good performance of the proposed approach. The method detects the optic disc and traces its contours better than the other systems presented in the literature and tested on the same data. The average error in the obtained contour masks is reasonably close to the inter-operator errors and suitable for practical applications. The optic disc segmentation pipeline is currently integrated in a complete software suite for the semiautomatic quantification of retinal vessel properties from fundus camera images (VAMPIRE).
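The localization step can be illustrated with a deliberately crude stand-in: score every candidate centre by the mean brightness inside a circular window and keep the best. This is not the paper's symmetry-based localization or contour fitting; the toy image and disc radius are assumptions for illustration only.

```python
import numpy as np

def locate_bright_disc(img, radius):
    """Brute-force disc matching: score each candidate centre by the mean
    intensity inside a circular window, returning the best-scoring centre.
    A crude illustrative stand-in for optic disc localization."""
    H, W = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (yy**2 + xx**2) <= radius**2
    best, best_score = None, -np.inf
    for cy in range(radius, H - radius):
        for cx in range(radius, W - radius):
            window = img[cy - radius:cy + radius + 1,
                         cx - radius:cx + radius + 1]
            score = window[mask].mean()
            if score > best_score:
                best, best_score = (cy, cx), score
    return best

# Toy "fundus": dark background with a bright disc centred at (30, 40)
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 30)**2 + (xx - 40)**2 <= 36] = 1.0
center = locate_bright_disc(img, radius=6)
```

In a real fundus image the vessels crossing the disc disrupt this kind of brightness score, which is one motivation for the vessel-removal and inpainting stage the abstract describes.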
Interactive Prediction and Planning for Autonomous Driving: from Algorithms to Fundamental Aspects
Inevitably, autonomous vehicles need to interact with other road participants in a variety of highly complex or critical driving scenarios. Even for the leading companies and institutes, it remains an extremely challenging task to enable autonomous vehicles to interactively predict the behavior of others and plan safe, high-quality motions accordingly. The major obstacles do not stem solely from the insufficient performance of prediction and planning algorithms. Several fundamental problems in interactive prediction and planning remain open, such as the formulation, representation, and evaluation of interactive prediction methods; motion datasets with densely interactive driving behavior; and the interface between interactive prediction and planning algorithms. This dissertation addresses these fundamental aspects of interactive prediction and planning along with various kinds of algorithms. First, a generic environmental representation for various scenarios is constructed via topological decomposition, and a corresponding planning algorithm is designed by combining graph search and optimization. Hard constraints from optimization-based planners are also incorporated into the training loss of imitation learning so that the policy net can generate safe and feasible motions in highly constrained scenarios. A unified problem formulation and motion representation are designed for different paradigms of interactive predictors, such as planning-based prediction (inverse reinforcement learning), probabilistic graphical models (hidden Markov models), and deep neural networks (mixture density networks), which are utilized for the prediction/planning interface design and the prediction benchmark.
A framework combining a decision network with graph-search-, optimization-, and sampling-based planners is proposed to achieve a driving strategy that is defensive against potential violations by others, yet not overly conservative toward low-probability threats. This strategy is realized in experiments built on the aforementioned interactive prediction and planning algorithms with properly designed interfaces. The predictors are also evaluated from a closed-loop perspective, considering planning fatality when using the prediction results, rather than pure data-approximation metrics. Finally, the INTERACTION (INTERnational, Adversarial and Cooperative moTION) dataset, containing highly interactive driving scenarios and behavior from international locations, is constructed, with an interaction-density metric defined to compare different datasets. The dataset has been utilized for various behavior-related research areas such as prediction, planning, imitation learning, and behavior modeling, and is inspiring new research fields such as representation learning, interaction extraction, and scenario generation.
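The graph-search stage of such a combined planner can be sketched generically. Below is a textbook A* search on a 4-connected occupancy grid with a Manhattan heuristic; it is a standard illustration, not the dissertation's planner, and the grid, start, and goal are made-up examples.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied),
    unit step cost, Manhattan-distance heuristic. Returns the list of
    cells from start to goal, or None if no path exists."""
    H, W = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came, cost = {}, {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came:                      # already expanded
            continue
        came[node] = parent
        if node == goal:                      # reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = node[0] + dy, node[1] + dx
            if 0 <= ny < H and 0 <= nx < W and grid[ny][nx] == 0:
                ng = g + 1
                if ng < cost.get((ny, nx), float('inf')):
                    cost[(ny, nx)] = ng
                    heapq.heappush(frontier, (ng + h((ny, nx)), ng, (ny, nx), node))
    return None

# Route around a wall: only the rightmost cell of the middle row is open
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

In the combined scheme the abstract describes, a discrete search like this would supply a topological/homotopy choice of route, which a continuous optimizer then refines into a dynamically feasible trajectory.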
Modeling Model Uncertainty
Recently there has been a great deal of interest in studying monetary policy under model uncertainty. We point out that different assumptions about the uncertainty may result in drastically different 'robust' policy recommendations. Therefore, we develop new methods to analyze uncertainty about the parameters of a model, the lag specification, the serial correlation of shocks, and the effects of real-time data in one coherent structure. We consider both parametric and nonparametric specifications of this structure and use them to estimate the uncertainty in a small model of the US economy. We then use our estimates to compute robust Bayesian and minimax monetary policy rules, which are designed to perform well in the face of uncertainty. Our results suggest that the aggressiveness recently found in robust policy rules is likely to be caused by overemphasizing uncertainty about economic dynamics at low frequencies.
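The distinction between robust Bayesian and minimax rules can be illustrated with a stylized one-period example: inflation responds to the policy rate through an uncertain slope, and the two rules minimize expected and worst-case quadratic loss respectively. All numbers below are illustrative assumptions, not estimates from the paper's model of the US economy.

```python
import numpy as np

# Stylized static model: inflation pi = a - b * r, with the policy rate r
# set by the central bank and the transmission slope b uncertain.
# Quadratic loss: pi^2 + w * r^2.
a, w = 2.0, 0.1
b_grid = np.array([0.5, 1.0, 1.5])           # possible slopes
prior = np.array([0.25, 0.5, 0.25])          # beliefs over the slopes
r_grid = np.linspace(0.0, 4.0, 401)          # candidate policy settings

# Loss for every (policy, model) pair: shape (len(r_grid), len(b_grid))
loss = (a - b_grid[None, :] * r_grid[:, None])**2 + w * r_grid[:, None]**2

r_bayes = r_grid[np.argmin(loss @ prior)]        # minimize expected loss
r_minimax = r_grid[np.argmin(loss.max(axis=1))]  # minimize worst-case loss
```

By construction the minimax rule guards against the least favorable slope in the grid, while the Bayesian rule weights models by the prior; in this toy setup the two prescriptions differ, which is the paper's point that the assumed form of uncertainty drives the recommended policy.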