
    Rough Set Based Approach for IMT Automatic Estimation

    Carotid artery (CA) intima-media thickness (IMT) is commonly regarded as one of the risk markers for cardiovascular diseases. The automatic estimation of the IMT on ultrasound images relies on the correct identification of the lumen-intima (LI) and media-adventitia (MA) interfaces, a task complicated by noise, vessel morphology and pathology of the carotid artery. In a previous study we applied four non-linear methods for feature selection to a set of variables extracted from ultrasound carotid images. The main aim was to select the parameters carrying the most information for classifying image pixels into the carotid regions they belong to. In this study we present a pixel classifier based on the selected features. Once the pixel classification was performed, the IMT was evaluated and compared with two sets of manually traced profiles. The results showed that the automatic IMTs are not statistically different from the manual ones.
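    A minimal sketch of the measurement step described above, assuming the LI and MA interfaces have already been extracted as one row coordinate per image column: the IMT is then the column-wise vertical distance between the two profiles, converted to millimetres. The function name estimate_imt, the profile representation and the calibration value are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def estimate_imt(li_profile, ma_profile, pixel_spacing_mm):
            """Mean and standard deviation of the IMT in millimetres.

            li_profile, ma_profile: 1-D arrays giving, for each image column,
            the row coordinate (in pixels) of the lumen-intima and
            media-adventitia interfaces. pixel_spacing_mm is the assumed
            physical height of one pixel.
            """
            thickness_px = np.asarray(ma_profile, float) - np.asarray(li_profile, float)
            thickness_mm = thickness_px * pixel_spacing_mm
            return thickness_mm.mean(), thickness_mm.std()

        # Two synthetic interface profiles 9 pixels apart at 0.06 mm/pixel.
        li = np.full(200, 120.0)
        ma = li + 9.0
        print(estimate_imt(li, ma, 0.06))  # approximately (0.54, 0.0)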

    Computing fuzzy rough approximations in large scale information systems

    Rough set theory is a popular and powerful machine learning tool. It is especially suitable for dealing with information systems that exhibit inconsistencies, i.e. objects that have the same values for the conditional attributes but a different value for the decision attribute. In line with the emerging granular computing paradigm, rough set theory groups objects together based on the indiscernibility of their attribute values. Fuzzy rough set theory extends rough set theory to data with continuous attributes and detects degrees of inconsistency in the data. Key to this is turning the indiscernibility relation into a gradual relation, acknowledging that objects can be similar to a certain extent. In very large datasets with millions of objects, computing the gradual indiscernibility relation (in other words, the soft granules) is very demanding, both in terms of runtime and in terms of memory. It is, however, required for computing the lower and upper approximations of concepts in the fuzzy rough set analysis pipeline. Current non-distributed implementations in R are limited by memory capacity: we found that a state-of-the-art non-distributed implementation in R could not handle 30,000 rows and 10 attributes on a node with 62 GB of memory, which is clearly insufficient to scale fuzzy rough set analysis to massive datasets. In this paper we present a parallel and distributed solution based on the Message Passing Interface (MPI) to compute fuzzy rough approximations in very large information systems. Our results show that our parallel approach scales with problem size to information systems with millions of objects. To the best of our knowledge, no other parallel and distributed solutions have been proposed in the literature for this problem.
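    To make the bottleneck concrete, here is a sequential sketch (not the paper's MPI code) of the computation the abstract describes: build the gradual indiscernibility relation over all object pairs, then derive the lower and upper approximations of a decision class. The connectives are assumptions — per-attribute similarity 1 - |a(x)-a(y)|/range(a) aggregated with the minimum t-norm, the Łukasiewicz implicator for the lower approximation and the minimum t-norm for the upper one; the paper may use different choices.

        import numpy as np

        def fuzzy_rough_approximations(X, y, target_class):
            """Lower/upper fuzzy rough approximations of one decision class."""
            X = np.asarray(X, float)
            rng = X.max(axis=0) - X.min(axis=0)
            rng[rng == 0] = 1.0                      # guard constant attributes
            # Pairwise per-attribute similarity, aggregated by min: (n, n) matrix.
            diffs = np.abs(X[:, None, :] - X[None, :, :]) / rng
            R = np.clip(1.0 - diffs, 0.0, 1.0).min(axis=2)
            A = (np.asarray(y) == target_class).astype(float)   # crisp concept
            lower = np.minimum(1.0, 1.0 - R + A[None, :]).min(axis=1)
            upper = np.minimum(R, A[None, :]).max(axis=1)
            return lower, upper

        # Object 3 is close to objects 0 and 1 but labelled differently,
        # so the lower approximation of class 1 drops below 1 there.
        X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.15, 0.85]])
        y = np.array([1, 1, 0, 0])
        print(fuzzy_rough_approximations(X, y, target_class=1))

    The (n, n) relation matrix R is exactly what becomes infeasible at millions of objects, both to store and to compute, which is why the paper distributes this step over MPI processes instead of materialising it on a single node.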

    Recognition of human body posture from a cloud of 3D data points using wavelet transform coefficients

    Addresses the problem of recognizing a human body posture from a cloud of 3D points acquired by a human body scanner. Motivated by finding a representation with high discriminatory power between posture classes, a new type of feature is suggested, namely the wavelet transform coefficients (WTC) of the 3D data-point distribution projected onto the space of spherical harmonics. A feature selection technique is developed to find the features with the highest discriminatory power. Integrated within a Bayesian classification framework and compared with other standard features, the WTC showed great capability in discriminating between close postures. The qualities of the WTC features were also reflected in the experimental results obtained with artificially generated postures, where the WTC achieved the best classification rate.
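    Since the pipeline is only loosely specified by the abstract, the following is a rough sketch of one plausible reading, not the authors' method: estimate the spherical harmonic expansion coefficients of the point distribution by Monte-Carlo averaging over the points, then wavelet-transform the magnitude spectrum to obtain the WTC feature vector. The function name wtc_features, the maximum degree, the Daubechies wavelet and the use of SciPy/PyWavelets are all assumptions.

        import numpy as np
        from scipy.special import sph_harm   # recent SciPy prefers sph_harm_y
        import pywt                          # PyWavelets

        def wtc_features(points, max_degree=8, wavelet="db2"):
            """WTC features of a 3D point cloud (illustrative sketch)."""
            p = points - points.mean(axis=0)                    # centre the cloud
            r = np.linalg.norm(p, axis=1)
            theta = np.arctan2(p[:, 1], p[:, 0]) % (2 * np.pi)  # azimuth
            phi = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1, 1))  # polar
            coeffs = []
            for n in range(max_degree + 1):
                for m in range(-n, n + 1):
                    # Monte-Carlo estimate of the expansion coefficient.
                    coeffs.append(np.mean(np.conj(sph_harm(m, n, theta, phi))))
            spectrum = np.abs(np.asarray(coeffs))               # magnitude spectrum
            return np.concatenate(pywt.wavedec(spectrum, wavelet))

        cloud = np.random.default_rng(0).normal(size=(5000, 3))
        print(wtc_features(cloud).shape)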

    The Challenge of Non-Technical Loss Detection using Artificial Intelligence: A Survey

    Detection of non-technical losses (NTL), which include electricity theft, faulty meters and billing errors, has attracted increasing attention from researchers in electrical engineering and computer science. NTLs cause significant harm to the economy: in some countries they may amount to up to 40% of the total electricity distributed. The predominant research direction is to employ artificial intelligence to predict whether a customer causes NTL. This paper first provides an overview of how NTLs are defined and of their impact on economies, including the loss of revenue and profit of electricity providers and the decreased stability and reliability of electrical power grids. It then surveys the state-of-the-art research efforts in an up-to-date and comprehensive review of the algorithms, features and data sets used. It finally identifies the key scientific and engineering challenges in NTL detection and suggests how they could be addressed in the future.
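    As a hedged illustration of the predominant direction the survey describes — supervised prediction of whether a customer causes NTL — the sketch below trains a classifier on hand-crafted consumption features. The data are fully synthetic and the features (level, variability, recent-to-historical ratio) are invented for illustration; they stand in for the kinds of features the surveyed systems derive from real smart-meter records.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(42)

        # Synthetic monthly consumption for 2,000 customers over 24 months;
        # for the 200 "NTL" customers, consumption drops after tampering.
        normal = rng.gamma(shape=5.0, scale=60.0, size=(1800, 24))
        theft = rng.gamma(shape=5.0, scale=60.0, size=(200, 24))
        theft[:, 12:] *= 0.4
        usage = np.vstack([normal, theft])
        labels = np.r_[np.zeros(1800), np.ones(200)]

        features = np.column_stack([
            usage.mean(axis=1),                                        # level
            usage.std(axis=1),                                         # variability
            usage[:, 12:].mean(axis=1) / usage[:, :12].mean(axis=1),   # recent drop
        ])

        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, test_size=0.25, stratify=labels, random_state=0)
        clf = RandomForestClassifier(class_weight="balanced", random_state=0)
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))

    The 9:1 class ratio mirrors the typical situation in which NTL customers are a small minority, hence the balanced class weights; this imbalance is one of the recurring challenges such detectors must address.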