Mixing and non-mixing local minima of the entropy contrast for blind source separation
In this paper, both non-mixing and mixing local minima of the entropy are
analyzed from the viewpoint of blind source separation (BSS); they correspond
respectively to acceptable and spurious solutions of the BSS problem. The
contribution of this work is twofold. First, a Taylor expansion is used to
show that the \textit{exact} output entropy cost function has a non-mixing
minimum when this output is proportional to \textit{any} of the non-Gaussian
sources, and not only when the output is proportional to the lowest entropic
source. Second, in order to prove that mixing entropy minima exist when the
source densities are strongly multimodal, an entropy approximator is proposed.
The latter has the major advantage that an error bound can be provided. Although
this approximator (and the associated bound) is used here in the BSS context,
it can be applied to estimate the entropy of any random variable with a
multimodal density.
Comment: 11 pages, 6 figures. To appear in IEEE Transactions on Information Theory.
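As a rough illustration of the kind of problem such an approximator addresses, the differential entropy of a strongly multimodal variable can be estimated with a generic plug-in kernel estimator. This is not the paper's approximator (which comes with an error bound); the bandwidth below is hand-picked for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A strongly bimodal "source": mixture of two well-separated Gaussians.
x = np.concatenate([rng.normal(-3, 0.5, 1000), rng.normal(3, 0.5, 1000)])

# Plug-in (resubstitution) entropy estimate H(X) ~ -E[log p_hat(X)],
# with p_hat a Gaussian kernel density estimate. Bandwidth h is chosen
# by hand to roughly match the component scale.
h = 0.2
diff = x[:, None] - x[None, :]
p_hat = np.mean(np.exp(-0.5 * (diff / h) ** 2), axis=1) / (h * np.sqrt(2 * np.pi))
H_est = -np.mean(np.log(p_hat))

# Reference: two non-overlapping N(., 0.5^2) modes have entropy equal to
# one component's entropy plus log(2) of mode-selection uncertainty.
H_ref = 0.5 * np.log(2 * np.pi * np.e * 0.25) + np.log(2)
print(H_est, H_ref)
```

The extra `log(2)` term is exactly the multimodality effect the paper exploits: well-separated modes add mode-selection uncertainty on top of the per-component entropy.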
An Invariance Principle for Maintaining the Operating Point of a Neuron
Sensory neurons adapt to changes in the natural statistics of their environments through processes such as gain control and firing threshold adjustment. It has been argued that neurons early in sensory pathways adapt according to information-theoretic criteria, perhaps maximising their coding efficiency or information rate. Here, we draw a distinction between how a neuron’s preferred operating point is determined and how its preferred operating point is maintained through adaptation. We propose that a neuron’s preferred operating point can be characterised by the probability density function (PDF) of its output spike rate, and that adaptation maintains an invariant output PDF, regardless of how this output PDF is initially set. Considering a sigmoidal transfer function for simplicity, we derive simple adaptation rules for a neuron with one sensory input that permit adaptation to the lower-order statistics of the input, independent of how the preferred operating point of the neuron is set. Thus, if the preferred operating point is, in fact, set according to information-theoretic criteria, then these rules nonetheless maintain a neuron at that point. Our approach generalises from the unimodal case to the multimodal case, for a neuron with inputs from distinct sensory channels, and we briefly consider this case too.
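This kind of adaptation can be sketched for a sigmoidal neuron y = sigmoid(g * (x - theta)): letting the threshold theta track the running input mean and the gain g the inverse running spread keeps the output PDF approximately invariant when the input's lower-order statistics shift. This is one plausible instantiation for illustration, not the exact rules derived in the paper.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def adapt(x_stream, eta=0.01):
    """Online adaptation for y = sigmoid(g * (x - theta)): theta tracks
    the running input mean and g the inverse running spread, so the
    output spike-rate PDF stays approximately invariant to shifts in
    the input's lower-order statistics."""
    theta, m2 = 0.0, 1.0
    ys = []
    for x in x_stream:
        g = 1.0 / np.sqrt(m2)                # gain ~ 1 / input std
        ys.append(sigmoid(g * (x - theta)))
        theta += eta * (x - theta)           # track input mean
        m2 += eta * ((x - theta) ** 2 - m2)  # track input variance
    return np.array(ys)

rng = np.random.default_rng(1)
# Input statistics change abruptly halfway through the stream.
x = np.concatenate([rng.normal(0, 1, 20000), rng.normal(5, 3, 20000)])
y = adapt(x)

# After re-adaptation the output statistics match across both regimes.
print(y[15000:20000].mean(), y[35000:].mean())  # both near 0.5
```

Note that the rules only need running first- and second-order input statistics; nothing in them depends on how the target operating point (here, the sigmoid at its midpoint) was originally chosen.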
Promoting Intermodal Connectivity at California’s High Speed Rail Stations
High-speed rail (HSR) has emerged as one of the most revolutionary and transformative transportation technologies, having a profound impact on urban-regional accessibility and inter-city travel across Europe, Japan, and more recently China and other Asian countries. One of HSR’s biggest advantages over air travel is that it offers passengers a one-seat ride into the center of major cities, eliminating time-consuming airport transfers and wait times, and providing ample opportunities for intermodal transfers at these locales. Thus, HSR passengers are typically able to arrive at stations that are only a short walk away from central business districts and major tourist attractions, without experiencing any of the stress that car drivers often experience in negotiating such highly congested environments. Such an approach requires a high level of coordination and planning of the infrastructural and spatial aspects of the HSR service, and a high degree of intermodal connectivity. But what key elements can help the US high-speed rail system blend successfully with other existing rail and transit services? That question is critically important now that high-speed rail is under construction in California. The study seeks to understand the requirements for high levels of connectivity and spatial and operational integration of HSR stations, and to offer recommendations for seamless and convenient integrated service in California intercity rail/HSR stations. The study draws data from a review of the literature on the connectivity, intermodality, and spatial and operational integration of transit systems; a survey of 26 high-speed rail experts from six different European countries; and an in-depth look at the German and Spanish HSR systems and some of their stations, which are deemed exemplary models of station connectivity.
The study offers recommendations on how to enhance both the spatial and the operational connectivity of high-speed rail systems, with emphasis on four spatial zones: the station, the station neighborhood, the municipality at large, and the region.
Groupwise Multimodal Image Registration using Joint Total Variation
In medical imaging it is common practice to acquire a wide range of
modalities (MRI, CT, PET, etc.), to highlight different structures or
pathologies. As patient movement between scans or scanning sessions is
unavoidable, registration is often an essential step before any subsequent
image analysis. In this paper, we introduce a cost function based on joint
total variation for such multimodal image registration. This cost function has
the advantage of enabling principled, groupwise alignment of multiple images,
whilst being insensitive to strong intensity non-uniformities. We evaluate our
algorithm on rigidly aligning both simulated and real 3D brain scans. This
validation shows robustness to strong intensity non-uniformities and low
registration errors for CT/PET to MRI alignment. Our implementation is publicly
available at https://github.com/brudfors/coregistration-njtv
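The core idea of the joint total variation cost can be sketched as follows: at each voxel, take the Euclidean norm of the stacked spatial gradients of all images, summed over the volume, so that co-located edges are counted once and better alignment lowers the cost. An illustrative NumPy version, not the repository implementation:

```python
import numpy as np

def joint_total_variation(images, eps=1e-8):
    """Joint TV of a group of 3D images: at each voxel, the Euclidean
    norm of the stacked spatial gradients of all images, summed over
    the volume. Shared edges are counted once, so alignment lowers the
    cost, and per-channel intensity scaling does not change which
    configuration is optimal."""
    grads = []
    for img in images:
        grads.extend(np.gradient(img))     # d/dx, d/dy, d/dz per image
    g = np.stack(grads)                    # (3 * n_images, X, Y, Z)
    return np.sum(np.sqrt(np.sum(g ** 2, axis=0) + eps))

# A smooth blob; the second "modality" is a rescaled copy (strong
# intensity differences), optionally shifted to simulate misalignment.
ax = np.linspace(-1, 1, 24)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
blob = np.exp(-(X ** 2 + Y ** 2 + Z ** 2) / 0.1)

aligned = joint_total_variation([blob, 2.0 * blob])
misaligned = joint_total_variation([blob, 2.0 * np.roll(blob, 8, axis=0)])
print(aligned < misaligned)  # True: alignment lowers the joint TV
```

A registration algorithm then searches over rigid transformations of each image so as to minimise this groupwise cost.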
Bayesian Methods for Analysis and Adaptive Scheduling of Exoplanet Observations
We describe work in progress by a collaboration of astronomers and
statisticians developing a suite of Bayesian data analysis tools for extrasolar
planet (exoplanet) detection, planetary orbit estimation, and adaptive
scheduling of observations. Our work addresses analysis of stellar reflex
motion data, where a planet is detected by observing the "wobble" of its host
star as it responds to the gravitational tug of the orbiting planet. Newtonian
mechanics specifies an analytical model for the resulting time series, but it
is strongly nonlinear, yielding complex, multimodal likelihood functions; it is
even more complex when multiple planets are present. The parameter spaces range
in size from a few dimensions to dozens, depending on the number
of planets in the system, and the type of motion measured (line-of-sight
velocity, or position on the sky). Since orbits are periodic, Bayesian
generalizations of periodogram methods facilitate the analysis. This relies on
the model being linearly separable, enabling partial analytical
marginalization, reducing the dimension of the parameter space. Subsequent
analysis uses adaptive Markov chain Monte Carlo methods and adaptive importance
sampling to perform the integrals required for both inference (planet detection
and orbit measurement), and information-maximizing sequential design (for
adaptive scheduling of observations). We present an overview of our current
techniques and highlight directions being explored by ongoing research.
Comment: 29 pages, 11 figures. An abridged version is accepted for publication in Statistical Methodology for a special issue on astrostatistics, with selected (refereed) papers presented at the Astronomical Data Analysis Conference (ADA VI) held in Monastir, Tunisia, in May 2010. Update corrects equation (3).
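The linear separability behind the Bayesian periodogram idea is easiest to see for a circular orbit, where the radial-velocity model v(t) = A cos(wt) + B sin(wt) + C is linear in (A, B, C) given the period, so those amplitudes can be profiled out by least squares (or, with Gaussian priors, marginalized analytically) at each trial period. A toy sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated radial-velocity data: one planet on a circular orbit
# (period 7.5 d, semi-amplitude 10 m/s), measurement noise 2 m/s.
t = np.sort(rng.uniform(0, 100, 80))
P_true = 7.5
v = 10 * np.sin(2 * np.pi * t / P_true) + 3.0 + rng.normal(0, 2.0, t.size)

def residual_power(P):
    """Profile out the linear amplitudes (A, B, C) at trial period P
    by least squares and return the residual sum of squares."""
    w = 2 * np.pi / P
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    r = v - X @ beta
    return r @ r

# Scan a grid of trial periods; only the nonlinear parameter (the
# period) remains to be searched, as in periodogram-style methods.
periods = np.linspace(2, 20, 2000)
best = periods[np.argmin([residual_power(P) for P in periods])]
print(best)  # close to the true 7.5-day period
```

Eccentric or multi-planet models add further nonlinear parameters, which is where the adaptive MCMC and importance-sampling machinery described above takes over.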
MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework
As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple source domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying a deep learning approach requires expertise in constructing a deep architecture that can take multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, a deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for classification tasks. MildInt contains two learning phases: learning feature representations from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to learning more task-relevant feature representations than a linear model. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers from multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated data and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time series and non-time-series data, for extracting complementary features from the multimodal dataset.
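The two-phase structure can be illustrated with a toy stand-in. MildInt's first phase learns per-modality representations with recurrent networks; here a hand-rolled encoder (per-subject mean and trend of a time series, plus standardised static features) stands in for that phase, and phase 2 is a plain linear (logistic) classifier on the concatenated features. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cohort: one longitudinal modality (n subjects x 12 visits)
# and one static modality (n subjects x 4 features).
n = 400
ts = rng.normal(size=(n, 12))
ts += np.linspace(0, 1, 12) * rng.normal(size=(n, 1))  # subject-level trend
static = rng.normal(size=(n, 4))
y = ((ts[:, -1] - ts[:, 0]) + static[:, 0] > 0).astype(float)

# Phase 1: encode each modality separately, then concatenate.
t_axis = np.arange(12)
slope = np.polyfit(t_axis, ts.T, 1)[0]      # per-subject trend coefficient
feats = np.column_stack([ts.mean(axis=1), slope, static])
feats = (feats - feats.mean(0)) / feats.std(0)

# Phase 2: linear classifier (logistic regression by gradient descent);
# its weights are directly inspectable, which is the interpretability
# argument made in the abstract.
w = np.zeros(feats.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - y) / n

acc = (((feats @ w) > 0) == (y > 0.5)).mean()
print(acc)  # well above chance on this toy problem
```

The division of labour mirrors the package's design: a flexible encoder absorbs the heterogeneity of each modality, while the final linear stage keeps the per-feature contributions readable.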