14 research outputs found
A Statistical Analysis of the "Internal Linear Combination" Method in Problems of Signal Separation as in CMB Observations
AIMS: The separation of foreground contamination from cosmic microwave
background (CMB) observations is one of the most challenging and important
problems of digital signal processing in cosmology. Various techniques have
been presented in the literature, but no general consensus about their actual
performance and properties has been reached. This is because these techniques
have been studied essentially through numerical simulations based on
semi-empirical models of the CMB and the Galactic foregrounds. Such models
often have different levels of sophistication and/or are based on different
physical assumptions (e.g., the number of Galactic components and the level of
the noise). Hence, a reliable comparison is difficult. What is actually missing
is a statistical analysis of the properties of the proposed methodologies.
Here, we consider the "Internal Linear Combination" (ILC) method which, among
the separation techniques, requires the smallest number of a priori
assumptions. This feature is of particular interest in the context of CMB
polarization measurements at small angular scales, where the lack of knowledge
of the polarized backgrounds represents a serious limitation. METHODS: The
statistical characteristics of ILC are examined through an analytical approach,
and the basic conditions under which it works satisfactorily are established.
RESULTS: ILC provides satisfactory results only under rather restrictive
conditions. This is a critical fact to take into consideration in planning
future ground-based observations (e.g., with ALMA) where, contrary to satellite
experiments, it is possible to exert some control over the experimental
conditions. Comment: A version of this manuscript without figures has been
accepted for publication by A&A (2008).
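The ILC estimator mentioned above has a well-known closed form: the output map is a weighted sum of the channel maps, with weights chosen to minimise the output variance subject to unit response to the CMB (whose emission is channel-independent in thermodynamic units). A minimal numpy sketch on synthetic data — the sky model here (one rank-one foreground, white noise) is an illustrative assumption, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_pix = 5, 10_000

# Synthetic sky: a common CMB signal plus a channel-dependent
# foreground and instrumental noise (illustrative, not the paper's model)
cmb = rng.standard_normal(n_pix)
fg = np.outer(np.linspace(1.0, 3.0, n_ch), rng.standard_normal(n_pix))
maps = cmb + fg + 0.1 * rng.standard_normal((n_ch, n_pix))

# ILC weights: minimise output variance subject to w @ e = 1, where
# e = ones because the CMB is channel-independent in thermodynamic units
C = np.cov(maps)
e = np.ones(n_ch)
Cinv_e = np.linalg.solve(C, e)
w = Cinv_e / (e @ Cinv_e)      # weights sum to 1 by construction
cmb_ilc = w @ maps             # ILC estimate of the CMB map
```

With a foreground spectrum that is not proportional to e, the variance-minimising weights largely null it while passing the CMB with unit gain.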
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy wherein many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
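The convolutive mixing model underlying these algorithms replaces the instantaneous mixture of classical BSS with FIR filtering: each microphone observes each source through a room impulse response, x_i(t) = sum_j (a_ij * s_j)(t). A minimal numpy sketch generating such a mixture — the filter length, number of channels, and Laplacian source statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 4_000, 8                     # samples, mixing-filter length (assumed)

# Two independent sources (hypothetical stand-ins for speech signals)
s = rng.laplace(size=(2, T))

# Convolutive mixture: x_i(t) = sum_j (a_ij * s_j)(t), where a_ij is the
# FIR response from source j to microphone i
A = rng.standard_normal((2, 2, L))
x = np.zeros((2, T))
for i in range(2):
    for j in range(2):
        x[i] += np.convolve(s[j], A[i, j])[:T]
```

A separation algorithm then has to invert (or equalise) this matrix of filters rather than a single mixing matrix, which is what distinguishes the convolutive problem surveyed in the chapter.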
Investigating the effects of a combined spatial and spectral dimensionality reduction approach for aerial hyperspectral target detection applications
Target detection and classification are important applications of hyperspectral imaging in remote sensing. A wide range of algorithms for target detection in hyperspectral images has been developed over the last few decades. Given the nature of hyperspectral images, they exhibit large quantities of redundant information and are therefore compressible. Dimensionality reduction is an effective means of both compressing and denoising data. Although spectral dimensionality reduction is prevalent in hyperspectral target detection applications, the spatial redundancy of a scene is rarely exploited. By applying simple spatial masking techniques as a preprocessing step to disregard pixels of definite disinterest, the subsequent spectral dimensionality reduction process becomes simpler, less costly and more informative. This paper proposes a processing pipeline to compress hyperspectral images both spatially and spectrally before applying target detection algorithms to the resultant scene. Combinations of several different spectral dimensionality reduction methods and target detection algorithms are evaluated within the proposed pipeline. We find that the Adaptive Cosine Estimator produces an improved F1 score and Matthews Correlation Coefficient when compared to unprocessed data. We also show that by using the proposed pipeline the data can be compressed by over 90% while target detection performance is maintained.
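The Adaptive Cosine Estimator named above has a standard closed form: the squared cosine between the background-whitened pixel and target spectra, ACE(x) = (s^T C^-1 x)^2 / ((s^T C^-1 s)(x^T C^-1 x)), with the background mean removed from both. A minimal numpy sketch on a toy scene — the synthetic background and target signature are illustrative assumptions, not the paper's data:

```python
import numpy as np

def ace(pixels, target, bg_mean, bg_cov):
    """Adaptive Cosine Estimator score for each pixel (rows of `pixels`)."""
    Ci = np.linalg.inv(bg_cov)
    s = target - bg_mean
    X = pixels - bg_mean
    num = (X @ Ci @ s) ** 2
    # diag(X Ci X^T), one quadratic form per pixel
    den = (s @ Ci @ s) * np.einsum('ij,jk,ik->i', X, Ci, X)
    return num / den              # squared cosine in whitened space, in [0, 1]

# Toy scene: ten background pixels plus one pixel carrying the target spectrum
rng = np.random.default_rng(2)
bands = 20
bg = rng.standard_normal((500, bands))
target = 2.0 * np.ones(bands)     # hypothetical target signature
scene = np.vstack([bg[:10], target + 0.05 * rng.standard_normal(bands)])
scores = ace(scene, target, bg.mean(0), np.cov(bg, rowvar=False))
```

Because the score is a normalised cosine, it is invariant to pixel scaling, which is one reason ACE is a popular detector for sub-pixel and variable-illumination targets.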
Automatic Image Classification for Planetary Exploration
Autonomous techniques in the context of planetary exploration can maximize scientific return and reduce the need for human involvement. This thesis studies two main problems in planetary exploration: rock image classification and hyperspectral image classification. Since rock textural images are usually inhomogeneous and manually hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. The proposed method is flexible and can outperform manually selected features. To take advantage of unlabelled rock images, we also propose a self-taught learning technique that learns the feature representation from unlabelled rock images and then applies the features to the classification of rock image subclasses. Since combining spatial information with spectral information can dramatically improve the performance of hyperspectral image (HSI) classification, we first propose an innovative framework to automatically generate spatial-spectral features for HSI. Two unsupervised learning methods, K-means and PCA, are utilized to learn the spatial feature bases in each decorrelated spectral band. Spatial-spectral features are then generated by concatenating the spatial feature representations in all (or the principal) spectral bands. In the second work on HSI classification, we propose stacking spectral patches to reduce the spectral dimensionality and generate 2-D spectral quilts. Such quilts retain all the spectral information and result in fewer convolutional parameters in neural networks. Two lightweight convolutional neural networks are then designed to classify the spectral quilts. In the third work on HSI classification, we propose a combinational fully convolutional network. The network not only takes advantage of the inherent computational efficiency of convolution at prediction time, but also performs as a collection of many paths, giving it an ensemble-like behavior that makes its performance robust.
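One plausible reading of the spectral-quilt step described above is a simple reshape: the B-band spectrum of a pixel is cut into contiguous patches that become the rows of a small 2-D image, so nothing spectral is discarded while a 2-D CNN can use compact kernels. A guessed sketch — the patch length and the row-stacking convention are assumptions, not the thesis's code:

```python
import numpy as np

def spectral_quilt(spectrum, patch_len):
    """Stack consecutive spectral patches into rows of a 2-D 'quilt'.

    Assumed construction: the spectrum is cut into B // patch_len
    contiguous patches, which become image rows, so all retained
    spectral values survive the reshape.
    """
    b = len(spectrum) - len(spectrum) % patch_len   # drop any remainder
    return np.asarray(spectrum[:b]).reshape(-1, patch_len)

pixel = np.arange(200, dtype=float)   # hypothetical 200-band spectrum
quilt = spectral_quilt(pixel, 20)     # 10 x 20 "image" for a 2-D CNN
```

A 2-D convolution over such a quilt sees both neighbouring bands (within a row) and bands one patch apart (across rows), which is consistent with the claim of fewer convolutional parameters than a long 1-D spectral kernel.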
Beta Hebbian Learning: definition and analysis of a new family of learning rules for exploratory projection pursuit
This thesis comprises an investigation into the derivation of learning rules in artificial neural networks from probabilistic criteria.
•Beta Hebbian Learning (BHL).
First, a new family of learning rules is derived, based on maximising the likelihood of the residual of a negative feedback network when that residual is assumed to follow a Beta distribution. The resulting algorithm, called Beta Hebbian Learning, outperforms current neural algorithms in Exploratory Projection Pursuit.
• Beta-Scale Invariant Map (Beta-SIM).
Secondly, Beta Hebbian Learning is applied to a well-known Topology Preserving Map algorithm, the Scale Invariant Map (SIM), to design a new version called the Beta-Scale Invariant Map (Beta-SIM). It is developed to facilitate the effective and efficient clustering and visualization of the internal structure of high-dimensional complex datasets, especially those characterized by an internal radial distribution. The behaviour of Beta-SIM is thoroughly analysed by comparing its results, in terms of performance quality measures, with those of other well-known topology-preserving models.
• Weighted Voting Superposition Beta-Scale Invariant Map (WeVoS-Beta-SIM).
Finally, the use of ensembles such as the Weighted Voting Superposition (WeVoS) is tested on the novel Beta-SIM algorithm, in order to improve its stability and to generate accurate topology maps for complex datasets. The resulting WeVoS-Beta-Scale Invariant Map (WeVoS-Beta-SIM) is presented, analysed and compared with other well-known topology-preserving models.
All algorithms have been successfully tested on various artificial datasets to corroborate their properties, as well as on highly complex real-world datasets.
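The negative feedback network underlying BHL can be sketched compactly: a feedforward activation y = Wx, a fed-back residual e = x - W^T y, and a Hebbian update on that residual. The abstract does not give the Beta-derived nonlinearity, so the sketch below uses the plain residual update (Oja's subspace rule) and only marks where BHL would differ — the data, learning rate, and network sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, eta = 10, 3, 0.01

# Correlated synthetic data, standardised per component
X = rng.standard_normal((2000, n_in)) @ rng.standard_normal((n_in, n_in))
X = (X - X.mean(0)) / X.std(0)

def recon_err(W):
    # Mean squared negative-feedback residual over the dataset
    return np.mean((X - X @ W.T @ W) ** 2)

W = 0.1 * rng.standard_normal((n_out, n_in))
err_before = recon_err(W)
for x in X:
    y = W @ x                  # feedforward activation
    e = x - W.T @ y            # negative-feedback residual
    # Plain Hebbian update on the residual (Oja's subspace rule). BHL
    # would replace `e` here with a nonlinear function of e derived from
    # the Beta likelihood; its exact form is in the thesis, not here.
    W += eta * np.outer(y, e)
err_after = recon_err(W)
```

Training drives the residual toward zero on the dominant subspace; the choice of residual nonlinearity (here the identity, in BHL the Beta-derived one) is what tunes the rule toward different projection-pursuit indices.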
STK /WST 795 Research Reports
These documents contain the honours research reports for each year for the Department of Statistics. Honours Research Reports - University of Pretoria, 20XX. Statistics. BSc (Hons) Mathematical Statistics, BCom (Hons) Statistics, BCom (Hons) Mathematical Statistics. Unrestricted.
Temporal properties of rehearsal in auditory-verbal short-term memory
Subvocal rehearsal, the use of inner speech for the maintenance of phonological material, is thought to play an important role in verbal short-term memory (STM). The importance of rehearsal is based largely on indirect measures, as it is difficult to detect and quantify. To address this issue and investigate rehearsal timing, a novel ‘rehearsal-probe’ task was developed. Individuals silently rehearsed an auditory-verbal sequence, responding after an unpredictable probe (a tone) by indicating the item currently being rehearsed. Presenting probes after variable, repeated delays yields item response proportions over time. The data were analysed using a theory-neutral measure of temporal precision: the circular standard deviation of the response distributions.
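The circular standard deviation used as the precision measure has a standard definition: sqrt(-2 ln R), where R is the mean resultant length of the response angles. A minimal numpy sketch — mapping item i of an n-item list to angle 2*pi*i/n around the rehearsal cycle is an illustrative convention, not the thesis's analysis code:

```python
import numpy as np

def circular_sd(angles):
    """Circular standard deviation (radians): sqrt(-2 ln R), where R is
    the mean resultant length of the angles on the unit circle."""
    R = np.abs(np.mean(np.exp(1j * np.asarray(angles))))
    return np.sqrt(-2 * np.log(R))

# Probe responses mapped onto the rehearsal cycle: item i of an n-item
# list sits at angle 2*pi*i/n (assumed convention)
n = 7
responses = np.array([2, 2, 3, 2, 1, 2, 2]) * 2 * np.pi / n
sd = circular_sd(responses)
```

Because the measure depends only on how tightly the responses cluster on the circle, it is insensitive to which item the cluster centres on, which is what makes it theory-neutral about rehearsal content.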
The methods were established across seven experiments designed to explore whether timing precision is fixed or resource-limited. Experiment 3 showed that timing precision decreases with increasing memory load. Temporal precision was negatively correlated with auditory-verbal STM span in six experiments, including one designed specifically to examine individual differences. Experiments 6 and 7 investigated timing in developmental language disorders, which are characterized by serial ordering deficits. Adults with dyslexia and children with language impairments showed greater temporal imprecision than matched controls. These results suggest that temporal precision is limited by shared resources and may play a role in language development.
A computational model was also developed to describe the data in terms of four separable temporal properties. The model captured the main characteristics of the data and provided quantitative estimates of each property. In an EEG experiment, event-related responses to item probes were modulated by the contents of rehearsal, and there was increased spectral power at the item rate during sequence presentation and rehearsal periods, but not during baseline periods. The findings suggest an important role for fine-grained timing information in serial-order STM and have broader implications for debates about models of serial order.
Review of Particle Physics
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 2,143 new measurements from 709 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. Particle properties and search limits are listed in Summary Tables. We give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 120 reviews are many that are new or heavily revised, including a new review on Machine Learning, and one on Spectroscopy of Light Meson Resonances.
The Review is divided into two volumes. Volume 1 includes the Summary Tables and 97 review articles. Volume 2 consists of the Particle Listings and also contains 23 reviews that address specific aspects of the data presented in the Listings.