Natural Image Coding in V1: How Much Use is Orientation Selectivity?
Orientation selectivity is the most striking feature of simple cell coding in
V1, and it has been shown to emerge from the reduction of higher-order
correlations in natural images in a large variety of statistical image models.
The most parsimonious one among these models is linear Independent Component
Analysis (ICA), whereas second-order decorrelation transformations such as
Principal Component Analysis (PCA) do not yield oriented filters. Because of
this finding it has been suggested that the emergence of orientation
selectivity may be explained by higher-order redundancy reduction. To assess
the tenability of this hypothesis, an important empirical question is how much
more redundancy can be removed with ICA than with PCA or other second-order
decorrelation methods. This question has not yet been settled: over the last
ten years, contradictory results have been reported, ranging from less than
five to more than one hundred percent extra gain for ICA.
Here, we aim at resolving this conflict by presenting a very careful and
comprehensive analysis using three evaluation criteria related to redundancy
reduction: In addition to the multi-information and the average log-loss we
compute, for the first time, complete rate-distortion curves for ICA in
comparison with PCA. Without exception, we find that the advantage of the ICA
filters is surprisingly small. Furthermore, we show that a simple spherically
symmetric distribution with only two parameters can fit the data even better
than the probabilistic model underlying ICA. Since spherically symmetric models
are agnostic with respect to the specific filter shapes, we conclude that
orientation selectivity is unlikely to play a critical role in redundancy
reduction.
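The PCA-versus-ICA comparison described above can be sketched in code. This is a minimal, hedged example using scikit-learn's PCA and FastICA as stand-ins for the second-order and higher-order decorrelation transforms; the 8x8 patch size is an illustrative assumption, and random data replaces real natural image patches (on which the ICA filters would come out localized and oriented, while the PCA filters would not):

```python
# Sketch: learn PCA and linear-ICA bases from image patches.
# Random Gaussian data stands in for real natural image patches.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# 5000 "patches" of size 8x8, flattened to 64-dimensional vectors.
patches = rng.standard_normal((5000, 64))
patches -= patches.mean(axis=0)          # center the data

pca = PCA(n_components=32).fit(patches)
ica = FastICA(n_components=32, whiten="unit-variance",
              random_state=0).fit(patches)

# PCA filters are the principal axes; ICA filters come from the
# estimated unmixing matrix. Both are rows of components_.
pca_filters = pca.components_            # shape (32, 64)
ica_filters = ica.components_            # shape (32, 64)
print(pca_filters.shape, ica_filters.shape)
```

Comparing the multi-information or log-loss of the two codes, as the abstract describes, would then be done on the transformed coefficients rather than on the filters themselves.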
An Independent Component Analysis Based Tool for Exploring Functional Connections in the Brain
This thesis describes the use of independent component analysis (ICA) as
a measure of voxel similarity, which allows the user to find and view
statistically independent maps of correlated voxel activity.
The tool developed in this work uses a specialized clustering technique,
designed to find and characterize clusters of activated voxels, to
compare the independent component spatial maps across patients. This
same method is also used to compare SPM results across patients.
Non Linear Blind Source Separation Using Different Optimization Techniques
The Independent Component Analysis (ICA) technique has been used for blind source separation of nonlinear mixtures. This project performs blind source separation of a nonlinear mixture of signals, using the mutual independence of the sources as the evaluation criterion. The linear mixer is modeled by the FastICA algorithm, while the nonlinear mixer is modeled by an odd polynomial function whose parameters are updated by four separate optimization techniques: Particle Swarm Optimization, Real-Coded Genetic Algorithm, Binary Genetic Algorithm, and Bacterial Foraging Optimization. The separated outputs in each case were studied, and the mean square errors were compared, giving an idea of the effectiveness of each optimization technique.
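The linear stage of this pipeline can be sketched as follows. This hedged example shows FastICA recovering two sources from a synthetic linear mixture; the mixing matrix and sources are illustrative, and the odd-polynomial compensator with its metaheuristic parameter search (PSO, GA, BFO) is only indicated in a comment, not reproduced:

```python
# Sketch: blind source separation of a linear two-channel mixture.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
s = np.c_[np.sin(2 * np.pi * 5 * t),            # source 1: sinusoid
          np.sign(np.sin(2 * np.pi * 3 * t))]   # source 2: square wave
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                      # "unknown" linear mixer
x = s @ A.T                                     # observed mixtures

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
s_hat = ica.fit_transform(x)                    # estimated sources

# For a nonlinear mixture, an odd polynomial such as
#   f(u) = a1*u + a3*u**3 + a5*u**5
# could model the nonlinearity, with a1, a3, a5 tuned by an outer
# optimizer that minimizes the dependence between separated outputs.
print(s_hat.shape)
```

The recovered sources come back with arbitrary order, sign, and scale, which is the usual ICA ambiguity and is why an independence measure rather than a direct error is used as the evaluation criterion.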
Towards music perception by redundancy reduction and unsupervised learning in probabilistic models
The study of music perception lies at the intersection of several disciplines: perceptual
psychology and cognitive science, musicology, psychoacoustics, and acoustical
signal processing amongst others. Developments in perceptual theory over the last
fifty years have emphasised an approach based on Shannon’s information theory and
its basis in probabilistic systems, and in particular, the idea that perceptual systems
in animals develop through a process of unsupervised learning in response to natural
sensory stimulation, whereby the emerging computational structures are well adapted
to the statistical structure of natural scenes. In turn, these ideas are being applied to
problems in music perception.
This thesis is an investigation of the principle of redundancy reduction through
unsupervised learning, as applied to representations of sound and music.
In the first part, previous work is reviewed, drawing on literature from some of the
fields mentioned above, and an argument is presented in support of the idea that perception
in general and music perception in particular can indeed be accommodated within
a framework of unsupervised learning in probabilistic models.
In the second part, two related methods are applied to two different low-level representations.
Firstly, linear redundancy reduction (Independent Component Analysis)
is applied to acoustic waveforms of speech and music. Secondly, the related method of
sparse coding is applied to a spectral representation of polyphonic music, which proves
sufficient both to identify the individual notes as the important structural elements
and to recover a rough transcription of the music.
Finally, the concepts of distance and similarity are considered, drawing in ideas
about noise, phase invariance, and topological maps. Some ecologically and information
theoretically motivated distance measures are suggested, and put into practice in
a novel method, using multidimensional scaling (MDS), for visualising geometrically
the dependency structure in a distributed representation.
Engineering and Physical Sciences Research Council
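The sparse-coding experiment on spectral representations can be illustrated with a toy example. This hedged sketch builds a tiny "polyphonic" magnitude spectrogram from two overlapping harmonic templates and uses scikit-learn's dictionary learning as a stand-in for the thesis's sparse-coding model; the frequency-bin positions and sparsity settings are illustrative assumptions:

```python
# Sketch: sparse coding of a toy spectral representation, where the
# learned dictionary atoms should approximate the two note spectra
# and the codes give a rough per-frame transcription.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_bins, n_frames = 40, 200
note_a = np.zeros(n_bins); note_a[[5, 10, 15]] = 1.0   # harmonic stack A
note_b = np.zeros(n_bins); note_b[[8, 16, 24]] = 1.0   # harmonic stack B
act = rng.random((2, n_frames)) < 0.3                  # sparse activations
spec = (np.outer(note_a, act[0]) + np.outer(note_b, act[1])).T

dl = DictionaryLearning(n_components=2, alpha=0.5,
                        max_iter=200, random_state=0)
codes = dl.fit_transform(spec)       # (frames, atoms): the "transcription"
print(dl.components_.shape, codes.shape)
```

On real polyphonic audio the same idea applies frame by frame to a short-time spectral representation, with many more dictionary atoms than the two used here.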
Evaluation of face recognition algorithms under noise
One of the major applications of computer vision and image processing is face recognition,
where a computerized algorithm automatically identifies a person’s face from
a large image dataset or even from a live video. This thesis addresses facial recognition,
a topic that has been widely studied due to its importance in many applications
in both civilian and military domains. The application of face recognition systems
has expanded from security purposes to social networking sites, managing fraud, and
improving user experience. Numerous algorithms have been designed to perform face
recognition with good accuracy. This problem is challenging due to the dynamic nature
of the human face and the different poses that it can take. Regardless of the
algorithm, facial recognition accuracy can be heavily affected by the presence of noise.
This thesis presents a comparison of traditional and deep learning face recognition
algorithms in the presence of noise. For this purpose, Gaussian and salt-and-pepper
noise is applied to the face images drawn from the ORL dataset. The
image recognition is performed using each of the following eight algorithms: principal
component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant
analysis (LDA), independent component analysis (ICA), discrete cosine transform
(DCT), support vector machine (SVM), convolutional neural network (CNN), and
AlexNet. The ORL dataset was used in the experiments to calculate the recognition accuracy
for each of the investigated algorithms. Each algorithm is evaluated with two
experiments; in the first experiment only one image per person is used for training,
whereas in the second experiment, five images per person are used for training. The
investigated traditional algorithms are implemented in MATLAB, and the deep learning
approaches are implemented in Python. The results show that the best traditional
performance was obtained using the DCT algorithm with 92% of the dominant eigenvalues,
reaching 95.25% accuracy, whereas for deep learning the best performance was obtained
with a CNN at 97.95% accuracy, making it the best choice under noisy conditions.
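The two noise models used in this evaluation can be sketched directly. This hedged example applies Gaussian and salt-and-pepper noise to a stand-in image; the 112x92 size matches the ORL face images, but the noise standard deviation and corruption rate are illustrative assumptions, not the thesis's settings:

```python
# Sketch: the two noise models from the evaluation above.
# Gaussian noise perturbs every pixel; salt-and-pepper noise forces
# a random fraction of pixels to pure black or pure white.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((112, 92))          # stand-in for one ORL face image in [0, 1]

def add_gaussian(img, sigma=0.1):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_salt_and_pepper(img, rate=0.05):
    """Corrupt a fraction `rate` of pixels, half pepper and half salt."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < rate / 2] = 0.0     # pepper: black pixels
    noisy[mask > 1 - rate / 2] = 1.0 # salt: white pixels
    return noisy

g = add_gaussian(img)
sp = add_salt_and_pepper(img)
print(g.shape, sp.shape)
```

Each recognition algorithm would then be trained on clean images and evaluated on the noisy versions, which is what makes the comparison sensitive to each method's noise robustness.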