Dublin City University video track experiments for TREC 2003
In this paper, we describe our experiments for both the News Story Segmentation task and the Interactive Search task of TRECVID 2003. For the News Story Segmentation task we used a Support Vector Machine (SVM) to combine evidence from audio-visual analysis tools in order to generate a listing of news stories from a given news programme. Our Search task experiment compared a video retrieval system based on text, image and relevance feedback with a text-only video retrieval system, in order to identify which was more effective. To do so, we developed two variations of our Físchlár video retrieval system and conducted user testing in a controlled lab environment. We outline our work on both tasks.
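The segmentation approach above fuses evidence scores from several analysis tools with an SVM. A minimal plain-Python sketch of that idea follows, training a linear SVM (hinge loss, subgradient descent) to label candidate points as story boundaries; the feature names, toy data and hyper-parameters are illustrative assumptions, not the actual Físchlár pipeline:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM (hinge loss + L2 penalty) by subgradient descent.
    X: list of feature vectors; y: labels in {-1, +1}."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge-loss subgradient step
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # correctly classified: only the regulariser acts
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Return +1 (story boundary) or -1 (no boundary)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical per-candidate evidence scores:
# [anchor-shot score, silence score, shot-cut score]
X = [[0.9, 0.8, 1.0], [0.8, 0.9, 0.9], [0.1, 0.2, 0.3], [0.2, 0.1, 0.1]]
y = [1, 1, -1, -1]   # +1 = story boundary, -1 = not
w, b = train_linear_svm(X, y)
```

The appeal of the SVM here is exactly what the abstract exploits: heterogeneous evidence scores can be concatenated into one feature vector and weighted jointly.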
ABC likelihood-free methods for model choice in Gibbs random fields
Gibbs random fields (GRF) are polymorphous statistical models that can be
used to analyse different types of dependence, in particular for spatially
correlated data. However, when those models are faced with the challenge of
selecting a dependence structure from many, the use of standard model choice
methods is hampered by the unavailability of the normalising constant in the
Gibbs likelihood. In particular, from a Bayesian perspective, the computation
of the posterior probabilities of the models under competition requires special
likelihood-free simulation techniques like the Approximate Bayesian Computation
(ABC) algorithm that is intensively used in population genetics. We show in
this paper how to implement an ABC algorithm geared towards model choice in the
general setting of Gibbs random fields, demonstrating in particular that there
exists a sufficient statistic across models. The accuracy of the approximation
to the posterior probabilities can be further improved by importance sampling
on the distribution of the models. The practical aspects of the method are
detailed through two applications, the test of an iid Bernoulli model versus a
first-order Markov chain, and the choice of a folding structure for two
proteins.

Comment: 19 pages, 5 figures, to appear in Bayesian Analysis
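The Bernoulli-versus-Markov test mentioned above can be sketched concretely. The toy below (not the authors' implementation; the priors, tolerance and exact statistic are illustrative assumptions) draws a model index and a parameter from their priors, simulates data, and accepts when a joint sufficient statistic, here (number of ones, number of equal consecutive pairs), lands close to that of the observed sequence; the acceptance proportions approximate the posterior model probabilities:

```python
import random

def simulate(model, n, theta):
    """Model 0: iid Bernoulli(theta).
    Model 1: first-order Markov chain that repeats the previous symbol
    with probability theta (first state uniform)."""
    if model == 0:
        return [1 if random.random() < theta else 0 for _ in range(n)]
    x = [random.randint(0, 1)]
    for _ in range(n - 1):
        x.append(x[-1] if random.random() < theta else 1 - x[-1])
    return x

def stat(x):
    """Statistic sufficient across both models:
    (# of ones, # of equal consecutive pairs)."""
    return (sum(x), sum(a == b for a, b in zip(x, x[1:])))

def abc_model_choice(x_obs, n_sims=20000, tol=2):
    """Basic ABC model choice: accept (model, theta) draws whose
    simulated statistic is within `tol` (L1 distance) of the observed one."""
    s_obs = stat(x_obs)
    counts = [0, 0]
    for _ in range(n_sims):
        m = random.randint(0, 1)        # uniform prior over the two models
        theta = random.random()         # uniform prior on the parameter
        s = stat(simulate(m, len(x_obs), theta))
        if abs(s[0] - s_obs[0]) + abs(s[1] - s_obs[1]) <= tol:
            counts[m] += 1
    total = counts[0] + counts[1]
    if total == 0:                      # no acceptance: stay at the prior
        return [0.5, 0.5]
    return [c / total for c in counts]  # approximate posterior probabilities
```

A strongly alternating sequence, for instance, is essentially impossible under any iid Bernoulli model but natural under an anti-persistent Markov chain, so the approximate posterior concentrates on model 1.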
Persistent Homology Tools for Image Analysis
Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA, and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain more topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions can be caused intentionally (e.g. by morphing or steganography) or arise naturally in scan images of abnormal human tissues/organs as a result of the onset of cancer or other diseases.
The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes built on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques for selecting image pixel landmarks to build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection scheme is demonstrated by testing on different image tampering problems, such as morphed face detection, steganalysis and breast tumour detection.
Vietoris-Rips simplicial complexes are constructed on the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarised in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing persistent homology (PH) based algorithms for the automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Each of these techniques yields several persistent barcodes that summarise persistent topological features and help to gain insight into complex hidden structures not accessible to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital mammographic images can differentiate malignant breast tumours from benign ones. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to an existing exemplar algorithm, for the reconstruction of missing image regions.
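As a concrete illustration of the barcode construction described above, the 0-dimensional part of a Vietoris-Rips filtration can be computed with a Kruskal-style union-find: every point is a component born at scale 0, and a component dies at the length of the edge that merges it into another. This is a generic sketch with hypothetical landmark coordinates, not the thesis pipeline (which also uses 1-dimensional Betti numbers and persistent binning):

```python
import math
from itertools import combinations

def h0_barcodes(points):
    """0-dimensional persistence barcodes of a Vietoris-Rips filtration.
    Each bar (birth=0, death) records the scale at which a connected
    component merges into another; one component never dies."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process edges in order of increasing length (Kruskal's scheme)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))   # a component dies at scale d
    bars.append((0.0, math.inf))    # the surviving component's infinite bar
    return bars

# Two well-separated clusters of landmark pixels (hypothetical coordinates)
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
bars = h0_barcodes(pts)
```

On this toy input the barcode shows four short bars (points merging within each cluster) and one long bar that only dies when the two clusters connect, which is exactly the kind of multi-scale signature the thesis feeds into its classifiers.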
A Review of the Family of Artificial Fish Swarm Algorithms: Recent Advances and Applications
The Artificial Fish Swarm Algorithm (AFSA) is inspired by the ecological
behaviors of fish schooling in nature, viz., the preying, swarming, following
and random behaviors. Owing to a number of salient properties, which include
flexibility, fast convergence, and insensitivity to the initial parameter
settings, the family of AFSA has emerged as an effective Swarm Intelligence
(SI) methodology that has been widely applied to solve real-world optimization
problems. Since its introduction in 2002, many improved and hybrid AFSA models
have been developed to tackle continuous, binary, and combinatorial
optimization problems. This paper aims to present a concise review of the
family of AFSA, encompassing the original AFSA and its improvements,
continuous, binary, discrete, and hybrid models, as well as the associated
applications. A comprehensive survey on the AFSA from its introduction to 2012
can be found in [1]. As such, we focus on a total of 123 articles
published in high-quality journals since 2013. We also discuss possible AFSA
enhancements and highlight future research directions for the family of
AFSA-based models.

Comment: 37 pages, 3 figures
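For readers new to AFSA, a minimal sketch of the four behaviours named above (preying, swarming, following, random) on a toy objective may help. This is a deliberately simplified, greedy variant with illustrative parameter settings and bounds, not any of the surveyed models:

```python
import random

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def afsa_minimize(f, dim=2, n_fish=20, visual=2.0, step=0.5,
                  crowd=0.8, tries=5, iters=300, seed=1):
    """Minimal AFSA sketch: each fish evaluates prey/swarm/follow moves
    within its visual range and keeps the best, falling back to a random
    move when preying fails."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    def step_toward(x, target):
        n = max(dist(x, target), 1e-12)
        return [xi + step * rng.random() * (ti - xi) / n
                for xi, ti in zip(x, target)]

    def prey(x):
        # Preying: probe up to `tries` random points within visual range
        for _ in range(tries):
            cand = [xi + visual * rng.uniform(-1, 1) for xi in x]
            if f(cand) < f(x):
                return step_toward(x, cand)
        # Random behaviour: fall back to a small random move
        return [xi + step * rng.uniform(-1, 1) for xi in x]

    school = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fish)]
    best = min(school, key=f)
    for _ in range(iters):
        for k, x in enumerate(school):
            moves = [x, prey(x)]
            nbrs = [y for y in school if y is not x and dist(x, y) < visual]
            if nbrs and len(nbrs) / n_fish < crowd:  # not too crowded
                centre = [sum(c) / len(nbrs) for c in zip(*nbrs)]
                moves.append(step_toward(x, centre))            # swarming
                moves.append(step_toward(x, min(nbrs, key=f)))  # following
            school[k] = min(moves, key=f)  # greedy pick among behaviours
        best = min(best, min(school, key=f), key=f)
    return best

best = afsa_minimize(sphere)
```

The crowding factor is what keeps the school from collapsing onto one fish; the surveyed improvements largely tune how the step size, visual range and behaviour selection adapt over iterations.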
Real Coded Binary Artificial Bee Colony (RC-BABC) Based Feature Selection and ReliefF Based Feature Extraction Techniques for Heart Disease Prediction
Diagnosing heart disease is a challenging task, and several intelligent diagnostic systems have been developed to improve diagnostic performance. However, the low accuracy of heart disease prediction in these systems remains a challenge. To predict heart risks more accurately, a novel feature selection approach is proposed that employs a Real Coded Binary Artificial Bee Colony (RC-BABC) optimization algorithm with adaptive size for feature elimination. This method has the advantages of reducing computational time, improving prediction accuracy, enhancing data quality, and saving resources in successive data collection phases. Once the features are selected, a ReliefF based feature extraction method extracts the features from the heart disease dataset. Feature scores are computed by comparing the feature values and class values of neighbouring samples. The proposed RC-BABC optimization algorithm is compared with three well-known methods, namely an artificial neural network (ANN), the K-means clustering approach and the Classification and Regression Tree algorithm (C&RT), using measures such as accuracy, precision, recall and F1-score. The proposed method achieved 96.77% accuracy, 98.8% recall, 97.8% precision and 98.34% F1-score.
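The ReliefF scoring idea, comparing a sample's feature values with those of its nearest neighbour of the same class (hit) and of the opposite class (miss), can be sketched with the basic Relief variant; ReliefF proper averages over k neighbours and handles multi-class data and missing values. The toy data below is illustrative, not the heart disease dataset:

```python
def relief_scores(X, y):
    """Relief-style feature weights: reward features that differ on the
    nearest miss and agree on the nearest hit (basic Relief, a
    simplification of ReliefF)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for k in range(d):
            w[k] += abs(X[i][k] - X[m][k]) - abs(X[i][k] - X[h][k])
    return [wk / n for wk in w]

# Hypothetical toy data: feature 0 predicts the class, feature 1 is noise
X = [[0.0, 0.3], [0.1, 0.9], [0.9, 0.2], [1.0, 0.8]]
y = [0, 0, 1, 1]
scores = relief_scores(X, y)
```

A discriminative feature receives a positive weight while a noise feature drifts toward zero or below, which is what makes the scores usable for ranking and pruning features before classification.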
Binary Multi-Verse Optimization (BMVO) Approaches for Feature Selection
Multi-Verse Optimization (MVO) is one of the newest meta-heuristic optimization algorithms; it imitates the multiverse theory in physics and models the interaction among universes. In problem domains such as feature selection, the solutions are constrained to binary values, viz. 0 and 1. With this in mind, binary versions of the MVO algorithm are proposed in this paper with two prime aims: first, to remove redundant and irrelevant features from the dataset and, second, to achieve better classification accuracy. The proposed binary versions use transformation functions to map the continuous MVO algorithm to its binary counterparts. For the experiments, 21 diverse datasets were used to compare the Binary MVO (BMVO) with binary versions of existing meta-heuristic algorithms. The proposed BMVO approaches outperformed them in terms of both the number of features selected and the accuracy of the classification process.
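The mapping from a continuous universe position to a binary feature mask via a transformation function can be sketched as follows; the S-shaped sigmoid used here is one common choice in the binarization literature, offered as an illustration rather than the exact function used in the paper:

```python
import math
import random

def s_shaped(v):
    """S-shaped transformation function: maps a continuous position
    component to the probability of the corresponding bit being 1."""
    return 1.0 / (1.0 + math.exp(-v))

def binarize(position, rng):
    """Map a continuous MVO-style position vector to a binary feature
    mask by sampling each bit with its transformed probability."""
    return [1 if rng.random() < s_shaped(v) else 0 for v in position]

rng = random.Random(42)
# Components strongly on, strongly off, and borderline (illustrative values)
mask = binarize([4.0, -4.0, 0.0], rng)
```

V-shaped transformation functions (e.g. |tanh(v)|) are the usual alternative; they flip bits based on the magnitude of the move rather than its sign, which often preserves more exploration.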