An M-QAM Signal Modulation Recognition Algorithm in AWGN Channel
Computing distinctive features from the input data before classification
accounts for much of the complexity of Automatic Modulation Classification
(AMC), which treats modulation classification as a pattern recognition problem.
Although algorithms that focus on Multilevel Quadrature Amplitude Modulation
(M-QAM) under different channel scenarios are well documented, a search of the
literature reveals that few studies address the classification of high-order
M-QAM schemes such as 128-QAM, 256-QAM, 512-QAM and 1024-QAM. This work
investigates the capability of natural logarithmic properties and the
possibility of extracting Higher-Order Cumulant (HOC) features from the raw
received data. The HOC features were extracted under an Additive White
Gaussian Noise (AWGN) channel, and four effective parameters were defined to
distinguish the modulation types in the set 4-QAM to 1024-QAM. This approach
makes the recognizer more intelligent and improves the classification success
rate. Simulation results, obtained under statistical models of noisy channels,
show that the proposed algorithm recognizes M-QAM signals. Most results were
promising: the logarithmic classifier works well over both AWGN and different
fading channels, and it achieves a reliable recognition rate even at low
signal-to-noise ratios (below zero). It can therefore be considered an
integrated AMC system for identifying high-order M-QAM signals; its unique
logarithmic classifier gives it high versatility and superior performance
compared with previous automatic modulation identification systems.
Comment: 18 pages
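The cumulant-feature idea in this abstract can be sketched numerically. The function below estimates the fourth-order cumulants C40 and C42 from complex baseband samples and applies a natural-log compression; the exact feature set, normalization, and classifier in the paper differ, so everything here is an illustrative assumption rather than the authors' algorithm.

```python
import numpy as np

def qam_cumulant_features(x):
    """Estimate fourth-order cumulants of a complex baseband signal and a
    natural-log compressed magnitude feature.  Illustrative only: the paper's
    actual feature set and normalization are not reproduced here."""
    m20 = np.mean(x ** 2)                 # second-order moments
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)                 # fourth-order moments
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2              # standard cumulant identities
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    # log compression spreads the closely spaced cumulant values of
    # high-order QAM constellations apart
    return {"C40": c40, "C42": c42, "lnC42": np.log(np.abs(c42) + 1e-12)}

# exact cumulants over the ideal unit-power 4-QAM constellation (no noise)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
feats = qam_cumulant_features(qpsk)
```

For ideal unit-power 4-QAM this yields the textbook theoretical value C42 = -1; noisy received samples would give estimates scattered around the per-scheme theoretical values, which is what makes these statistics usable as classification features.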
Named Entity Extraction and Disambiguation: The Reinforcement Effect.
Named entity extraction and disambiguation have received much attention in recent years. Typical fields addressing these topics are information retrieval, natural language processing, and the semantic web. Although these topics are highly dependent, almost no existing work examines this dependency. The aim of this paper is to examine the dependency and show how one affects the other, and vice versa. We conducted experiments with a set of descriptions of holiday homes with the aim of extracting and disambiguating toponyms as a representative example of named entities. We experimented with three approaches to disambiguation with the purpose of inferring the country of the holiday home. We examined how the effectiveness of extraction influences the effectiveness of disambiguation, and reciprocally, how filtering out ambiguous names (an activity that depends on the disambiguation process) improves the effectiveness of extraction. Since this, in turn, may improve the effectiveness of disambiguation again, it shows that extraction and disambiguation may reinforce each other.
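The reinforcement loop described here (disambiguation informing extraction filtering, and vice versa) can be illustrated with a toy gazetteer. The data, voting rule, and filtering criterion below are invented for illustration and are not the paper's actual method.

```python
from collections import Counter

def disambiguate_country(toponyms, gazetteer):
    """First pass: unambiguous toponyms vote for a country.  Second pass
    (the 'reinforcement' step): extracted names are kept only if they are
    consistent with the inferred country, filtering out spurious or
    inconsistent extractions."""
    votes = Counter()
    for t in toponyms:
        countries = gazetteer.get(t, [])
        if len(countries) == 1:           # unambiguous name: strong evidence
            votes[countries[0]] += 1
    country = votes.most_common(1)[0][0] if votes else None
    kept = [t for t in toponyms if country in gazetteer.get(t, [])]
    return country, kept

# toy gazetteer: "Paris" is ambiguous (France or the United States)
gaz = {"Paris": ["France", "United States"],
       "Lyon": ["France"], "Nice": ["France"]}
country, kept = disambiguate_country(["Paris", "Lyon", "Nice"], gaz)
```

Here the unambiguous names "Lyon" and "Nice" pin the document to France, which in turn lets the ambiguous "Paris" be kept with the right reading.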
Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval
Relevance feedback schemes based on support vector machines (SVM) have been widely used in content-based image retrieval (CBIR). However, the performance of SVM-based relevance feedback is often poor when the number of labeled positive feedback samples is small. This is mainly due to three reasons: 1) an SVM classifier is unstable on a small-sized training set, 2) the SVM's optimal hyperplane may be biased when there are far fewer positive than negative feedback samples, and 3) overfitting happens because the number of feature dimensions is much higher than the size of the training set. In this paper, we develop a mechanism to overcome these problems. To address the first two problems, we propose an asymmetric bagging-based SVM (AB-SVM). For the third problem, we combine the random subspace method and SVM for relevance feedback, which is named random subspace SVM (RS-SVM). Finally, by integrating AB-SVM and RS-SVM, an asymmetric bagging and random subspace SVM (ABRS-SVM) is built to solve these three problems and further improve the relevance feedback performance.
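A minimal sketch of the combined scheme, assuming scikit-learn's SVC as the base classifier and simple majority voting as the aggregation rule (the paper's aggregation strategy may differ):

```python
import numpy as np
from sklearn.svm import SVC

def abrs_svm_predict(X_pos, X_neg, X_query, n_bags=10,
                     subspace_frac=0.5, seed=0):
    """Sketch of asymmetric bagging + random subspace SVM (ABRS-SVM).
    Each weak SVM sees all positives, an equal-sized random bag of
    negatives (asymmetric bagging), and a random subset of feature
    dimensions (random subspace); predictions are combined by majority
    vote, an illustrative assumption."""
    rng = np.random.default_rng(seed)
    d = X_pos.shape[1]
    k = max(1, int(subspace_frac * d))
    votes = np.zeros(len(X_query))
    for _ in range(n_bags):
        neg_idx = rng.choice(len(X_neg), size=len(X_pos), replace=True)
        feat_idx = rng.choice(d, size=k, replace=False)
        X = np.vstack([X_pos[:, feat_idx], X_neg[neg_idx][:, feat_idx]])
        y = np.r_[np.ones(len(X_pos)), -np.ones(len(X_pos))]
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        votes += clf.predict(X_query[:, feat_idx])
    return np.sign(votes)  # +1 = relevant, -1 = irrelevant

# toy demo: well-separated relevant/irrelevant clusters (synthetic data)
rng = np.random.default_rng(1)
X_pos = rng.normal(5.0, 0.3, size=(20, 4))
X_neg = rng.normal(0.0, 0.3, size=(200, 4))
pred = abrs_svm_predict(X_pos, X_neg, np.array([[5.0] * 4, [0.0] * 4]))
```

Balancing each bag keeps the hyperplane from being dragged toward the negative class, while the random subspaces reduce the dimensionality each weak learner must fit, addressing the three problems the abstract lists.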
A Confidence-Based Approach for Balancing Fairness and Accuracy
We study three classical machine learning algorithms in the context of
algorithmic fairness: adaptive boosting, support vector machines, and logistic
regression. Our goal is to maintain the high accuracy of these learning
algorithms while reducing the degree to which they discriminate against
individuals because of their membership in a protected group.
Our first contribution is a method for achieving fairness by shifting the
decision boundary for the protected group. The method is based on the theory of
margins for boosting. Our method performs comparably to or outperforms previous
algorithms in the fairness literature in terms of accuracy and low
discrimination, while simultaneously allowing for a fast and transparent
quantification of the trade-off between bias and error.
Our second contribution addresses the shortcomings of the bias-error
trade-off studied in most of the algorithmic fairness literature. We
demonstrate that even hopelessly naive modifications of a biased algorithm,
which cannot be reasonably said to be fair, can still achieve low bias and high
accuracy. To help distinguish between these naive algorithms and more
sensible algorithms, we propose a new measure of fairness, called resilience to
random bias (RRB). We demonstrate that RRB distinguishes well between our naive
and sensible fairness algorithms. RRB, together with bias and accuracy,
provides a more complete picture of the fairness of an algorithm.
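The boundary-shifting idea of the first contribution can be sketched as a group-dependent threshold on a classifier's real-valued score. How the shift is chosen (via the theory of boosting margins in the paper) is omitted here, and the disparity measure below is a generic stand-in, not the paper's metric.

```python
import numpy as np

def shifted_decisions(scores, group, shift):
    """Threshold a trained classifier's real-valued score at 0 for the
    unprotected group and at -shift for the protected group (group == 1),
    pushing borderline protected individuals to the positive side.
    In practice `shift` would be tuned on held-out data to trade bias
    against error; the tuning rule is not shown."""
    thresh = np.where(group == 1, -shift, 0.0)
    return np.where(scores > thresh, 1, -1)

def demographic_disparity(y_pred, group):
    """Gap in positive-classification rates between the two groups
    (a crude disparity proxy for illustration)."""
    return (y_pred[group == 0] == 1).mean() - (y_pred[group == 1] == 1).mean()

# toy scores where the protected group sits closer to the boundary
scores = np.array([0.5, 0.5, -0.2, 0.3])
group = np.array([0, 0, 1, 1])
before = demographic_disparity(shifted_decisions(scores, group, 0.0), group)
after = demographic_disparity(shifted_decisions(scores, group, 0.3), group)
```

The appeal of this family of methods is exactly what the abstract claims: the single scalar `shift` makes the bias-versus-error trade-off fast to sweep and easy to inspect.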
Automated Visual Fin Identification of Individual Great White Sharks
This paper discusses the automated visual identification of individual great
white sharks from dorsal fin imagery. We propose a computer vision photo ID
system and report recognition results over a database of thousands of
unconstrained fin images. To the best of our knowledge this line of work
establishes the first fully automated contour-based visual ID system in the
field of animal biometrics. The approach put forward appreciates shark fins as
textureless, flexible and partially occluded objects with an individually
characteristic shape. In order to recover animal identities from an image we
first introduce an open contour stroke model, which extends multi-scale region
segmentation to achieve robust fin detection. Secondly, we show that
combinatorial, scale-space selective fingerprinting can successfully encode fin
individuality. We then measure the species-specific distribution of visual
individuality along the fin contour via an embedding into a global `fin space'.
Exploiting this domain, we finally propose a non-linear model for individual
animal recognition and combine all approaches into a fine-grained
multi-instance framework. We provide a system evaluation, compare results to
prior work, and report performance and properties in detail.
Comment: 17 pages, 16 figures. To be published in IJCV. Article replaced to
update first author contact details and to correct a Figure reference on page …
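To illustrate the contour-fingerprint idea in a few lines, here is a crude scale-invariant contour signature with nearest-neighbour matching. The paper's open contour stroke model and combinatorial scale-space fingerprinting are far more sophisticated; this is only a conceptual stand-in, and the shapes and gallery below are invented.

```python
import numpy as np

def contour_signature(points, n=64):
    """Resample a 2-D contour to n arc-length-equidistant points and return
    centroid distances normalized for scale.  A stand-in for a contour
    fingerprint, not the paper's encoding."""
    pts = np.asarray(points, float)
    seg = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    s = np.linspace(0, seg[-1], n)
    xy = np.c_[np.interp(s, seg, pts[:, 0]), np.interp(s, seg, pts[:, 1])]
    d = np.linalg.norm(xy - xy.mean(axis=0), axis=1)
    return d / d.max()                    # scale-invariant signature

def identify(query_sig, gallery):
    """Nearest-neighbour ID over stored signatures (Euclidean distance)."""
    return min(gallery, key=lambda k: np.linalg.norm(gallery[k] - query_sig))

# toy gallery: two individuals with different contour shapes
theta = np.linspace(0.0, 2 * np.pi, 100)
circle = np.c_[np.cos(theta), np.sin(theta)]
triangle = np.array([[0, 0], [2, 0], [1, 1.5], [0, 0]], float)
gallery = {"shark_A": contour_signature(circle),
           "shark_B": contour_signature(triangle)}
match = identify(contour_signature(3.0 * circle), gallery)  # rescaled query
```

Because the signature is normalized by arc length and maximum radius, the rescaled query still matches the correct individual, which is the minimal property any fin fingerprint needs before tackling flexibility and occlusion.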