
    Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis, and 1.000 and 0.975, respectively, on a per image basis.
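As a rough illustration of the genetic algorithm based feature selection step, the sketch below evolves binary masks over 21 features. The fitness function is a toy score (rewarding a few hypothetical "informative" indices and penalizing subset size) standing in for classifier performance, which the paper measures with an SVM; it is not the paper's objective.

```python
import random

random.seed(0)

N_FEATURES = 21              # the paper's 21-D feature vectors
INFORMATIVE = {0, 1, 2, 3}   # hypothetical "useful" features for this toy

def fitness(mask):
    """Toy stand-in for classifier accuracy: reward informative features, penalize size."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.05 * len(chosen)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in mask]

def ga_select(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga_select()
print(sorted(i for i, bit in enumerate(best) if bit))
```

Elitism guarantees the best mask never regresses between generations, so the search converges quickly on this separable toy objective.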

    Scene representation and matching for visual localization in hybrid camera scenarios

    Scene representation and matching are crucial steps in a variety of tasks ranging from 3D reconstruction to virtual/augmented/mixed reality applications to robotics. While approaches exist that tackle these tasks, they mostly overlook the issue of efficiency in the scene representation, which is fundamental in resource-constrained systems and for increasing computing speed. They also normally assume the use of projective cameras, while performance on systems based on other camera geometries remains suboptimal. This dissertation contributes a new efficient scene representation method that dramatically reduces the number of 3D points. The approach sets up an optimization problem for the automated selection of the most relevant points to retain. This leads to a constrained quadratic program, which is solved optimally with a newly introduced variant of the sequential minimal optimization method. In addition, a new initialization approach is introduced for fast convergence of the method. Extensive experimentation on public benchmark datasets demonstrates that the approach quickly produces a compressed scene representation while delivering accurate pose estimates. The dissertation also contributes new methods for scene matching that go beyond the use of projective cameras. Alternative camera geometries, such as fisheye cameras, produce images with very high distortion, making current image feature point detectors and descriptors less effective, since they were designed for projective cameras. New methods based on deep learning are introduced to address this problem, in which feature detectors and descriptors overcome distortion effects and perform feature matching more effectively between pairs of fisheye images, as well as between hybrid pairs of fisheye and perspective images. Due to the limited availability of fisheye-perspective image datasets, three datasets were collected for training and testing the methods. The results demonstrate an increase in detection and matching rates that outperforms the current state-of-the-art methods.
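The point-selection idea can be sketched with a much simpler stand-in: where the dissertation solves a constrained quadratic program with a sequential-minimal-optimization variant, the toy below greedily keeps the points that cover the most images. The greedy "K-cover" heuristic and the visibility map are invented illustrations, not the dissertation's method or data.

```python
def compress_scene(visibility, budget):
    """Greedily keep `budget` points, each step taking the point
    that observes the most images not yet covered."""
    selected, covered = [], set()
    candidates = set(visibility)
    for _ in range(budget):
        # sorted() makes tie-breaking deterministic (lowest point id wins)
        best = max(sorted(candidates),
                   key=lambda p: len(visibility[p] - covered))
        selected.append(best)
        covered |= visibility[best]
        candidates.discard(best)
    return selected, covered

# Invented toy data: point id -> set of image ids that observe it.
visibility = {
    0: {0, 1, 2, 3},
    1: {0, 1},
    2: {2, 3},
    3: {1, 2, 3, 4},
    4: {4},
}

selected, covered = compress_scene(visibility, budget=2)
print(selected, covered)  # two points suffice to cover all five images
```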

    Automated feature engineering based on Bayesian optimization for time series

    Nowadays, the task of forecasting time series is relevant to solving a wide range of problems in various spheres of human activity. One possible way to provide a prediction is to construct a forecasting model. The main criterion for the quality of a forecasting model is its accuracy. Researchers have resorted to different approaches to achieve the necessary accuracy of the forecasting model, including feature engineering. This paper presents an automated feature engineering method based on Bayesian optimization for time series data. The process of selecting an optimal set of features in order to minimize the objective function is described. The developed method is able to create new features from existing ones by using diverse algebraic operations. The proposed method considers any machine learning model as a black box, which allows different algorithms to be applied: linear regression, decision trees, neural networks, etc. The experiments demonstrated the high efficiency of the proposed approach. A comparative analysis showed that the developed algorithm was in most cases superior to human-made custom feature engineering. The accuracy of machine learning models is greatly improved by high-quality feature engineering. Mean squared error and the coefficient of determination were used as quality metrics for the machine learning models. The developed method was tested on open time series data from different subject areas (energy, manufacturing, air pollution), which provided reliable verification.
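As a minimal sketch of the idea (not the paper's implementation), the snippet below derives candidate features from two base series with algebraic operations and scores each with a one-parameter least-squares fit, keeping the candidate with the lowest mean squared error. The Bayesian optimization loop is replaced here by exhaustive scoring, and the data and candidate set are invented.

```python
# Invented toy data: the target secretly depends on the product of the bases.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 5.0]
y  = [a * b for a, b in zip(x1, x2)]   # hidden relationship: y = x1 * x2

# Candidate features built from the base features via algebraic operations.
candidates = {
    "x1": x1,
    "x2": x2,
    "x1+x2": [a + b for a, b in zip(x1, x2)],
    "x1*x2": [a * b for a, b in zip(x1, x2)],
    "x1-x2": [a - b for a, b in zip(x1, x2)],
}

def mse_of_fit(feature, target):
    """Treat the model as a black box: fit target ~ w * feature by
    least squares and return the mean squared error."""
    w = sum(f * t for f, t in zip(feature, target)) / sum(f * f for f in feature)
    return sum((t - w * f) ** 2 for f, t in zip(feature, target)) / len(target)

scores = {name: mse_of_fit(f, y) for name, f in candidates.items()}
best_feature = min(scores, key=scores.get)
print(best_feature)  # the engineered product feature fits the target exactly
```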

    Automated Deception Detection from Videos: Using End-to-End Learning Based High-Level Features and Classification Approaches

    Deception detection is an interdisciplinary field attracting researchers from psychology, criminology, computer science, and economics. We propose a multimodal approach combining deep learning and discriminative models for automated deception detection. Using video modalities, we employ convolutional end-to-end learning to analyze gaze, head pose, and facial expressions, achieving promising results compared to state-of-the-art methods. Due to limited training data, we also utilize discriminative models for deception detection. Although sequence-to-class approaches are explored, discriminative models outperform them due to data scarcity. Our approach is evaluated on five datasets, including a new Rolling-Dice Experiment motivated by economic factors. Results indicate that facial expressions outperform gaze and head pose, and that combining modalities with feature selection enhances detection performance. Differences in expressed features across datasets emphasize the importance of scenario-specific training data and the influence of context on deceptive behavior. Cross-dataset experiments reinforce these findings. Despite the challenges posed by low-stake datasets, including the Rolling-Dice Experiment, deception detection performance exceeds chance levels. Our proposed multimodal approach and comprehensive evaluation shed light on the potential of automating deception detection from video modalities, opening avenues for future research.
    Comment: 29 pages, 17 figures (19 if counting subfigures)
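The modality-combination finding can be illustrated with a simple score-level fusion sketch. The per-modality scores, the weights (facial expressions weighted higher, echoing the paper's finding), and the 0.5 decision threshold are all invented toy values, not figures from the paper.

```python
def fuse(scores, weights):
    """Weighted average of per-modality deception scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

# Invented per-modality scores for one video clip.
scores = {"gaze": 0.40, "head_pose": 0.55, "face": 0.90}
# Hypothetical weights: facial expressions carry the most signal.
weights = {"gaze": 1.0, "head_pose": 1.0, "face": 2.0}

fused = fuse(scores, weights)
label = "deceptive" if fused >= 0.5 else "truthful"
print(round(fused, 4), label)
```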

    Plane-based registration of construction laser scans with 3D/4D building models


    Recognizing contextual valence shifters in document-level sentiment classification

    Sentiment classification is an emerging research field. Owing to the wealth of opinionated web content, people and organizations are interested in knowing others' opinions, so they need an automated tool for analyzing and summarizing these opinions. One of the major tasks of sentiment classification is to classify a document (i.e. a blog, news article or review) as holding an overall positive or negative sentiment. Machine learning approaches have succeeded in achieving better results than semantic orientation approaches in document-level sentiment classification; however, they still need to take linguistic context into account by making use of so-called contextual valence shifters. Early research tried to add sentiment features and contextual valence shifters to the machine learning approach to tackle this problem, but the classifier's performance was low. In this study, we aim to improve the performance of document-level sentiment classification using the machine learning approach by proposing new feature sets that refine the traditional sentiment feature extraction method and take contextual valence shifters into consideration from a different perspective than the earlier research. These feature sets include: 1) a feature set consisting of 16 features counting different categories of contextual valence shifters (intensifiers, negators and polarity shifters) as well as the frequency of words grouped according to their final (modified) polarity; and 2) another feature set consisting of the frequency of each sentiment word after modifying its prior polarity.
    We performed several experiments to: 1) compare our proposed feature sets with the traditional sentiment features that count the frequency of each sentiment word while disregarding its prior polarity; 2) compare our proposed feature sets, after combining them with stylistic features and n-grams, with traditional sentiment features combined with stylistic features and n-grams; and 3) evaluate the effectiveness of our proposed feature sets against stylistic features and n-grams by performing feature selection. The results of all the experiments show a significant improvement over the baselines in terms of accuracy, precision and recall, which indicates that our proposed feature sets are effective in document-level sentiment classification.
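The first kind of feature set can be sketched as follows: count shifter occurrences and tally sentiment words by their final (shifted) polarity. The tiny lexicons and the one-token negation scope are invented simplifications; the study's 16 features and shifter categories are richer.

```python
# Invented miniature lexicons for illustration only.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}
NEGATORS = {"not", "never", "no"}
INTENSIFIERS = {"very", "extremely"}

def shifter_features(tokens):
    """Count valence shifters and words grouped by final (modified) polarity."""
    feats = {"negators": 0, "intensifiers": 0,
             "final_positive": 0, "final_negative": 0}
    negated = False
    for tok in tokens:
        if tok in NEGATORS:
            feats["negators"] += 1
            negated = True           # flip the next sentiment word's polarity
            continue
        if tok in INTENSIFIERS:
            feats["intensifiers"] += 1
            continue
        if tok in POSITIVE or tok in NEGATIVE:
            polarity = tok in POSITIVE
            if negated:
                polarity = not polarity
            feats["final_positive" if polarity else "final_negative"] += 1
        negated = False              # negation scope ends after one content word
    return feats

feats = shifter_features("the movie was not good but never terrible".split())
print(feats)
```

Note how "not good" lands in the final-negative tally and "never terrible" in the final-positive one, which a frequency count of prior polarities would get backwards.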

    Colour normalisation to reduce inter-patient and intra-patient variability in microaneurysm detection in colour retinal images

    Images of the human retina vary considerably in their appearance depending on the skin pigmentation (amount of melanin) of the subject. Some form of colour normalisation of retinal images is required for automated analysis if good sensitivity and specificity at detecting lesions are to be achieved in populations involving diverse races. Here we describe an approach to colour normalisation using intra-image shade correction and inter-image histogram normalisation. The colour normalisation is assessed by its effect on the automated detection of microaneurysms in retinal images. It is shown that the Naïve Bayes classifier used in microaneurysm detection benefits from the use of features measured over colour-normalised images.
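A minimal sketch of the inter-image step: shift and scale a channel so its mean and spread match a reference image. This moment matching is a simple stand-in for full histogram normalisation, and the pixel values below are invented; a real implementation would operate on each colour channel's full histogram.

```python
def channel_stats(channel):
    """Mean and standard deviation of a flat list of pixel values."""
    mean = sum(channel) / len(channel)
    var = sum((v - mean) ** 2 for v in channel) / len(channel)
    return mean, var ** 0.5

def normalise_channel(channel, ref_mean, ref_std):
    """Shift/scale the channel to the reference mean and spread."""
    mean, std = channel_stats(channel)
    scale = ref_std / std if std else 1.0
    return [ref_mean + (v - mean) * scale for v in channel]

# Invented toy values: the same scene captured darker in the target image.
reference = [100, 120, 140, 160, 180]   # reference green channel
target    = [40, 50, 60, 70, 80]        # darker image of the same scene

ref_mean, ref_std = channel_stats(reference)
normalised = normalise_channel(target, ref_mean, ref_std)
print([round(v) for v in normalised])  # matches the reference channel
```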