
    Adaptive imputation of missing values for incomplete pattern classification

    In the classification of incomplete patterns, missing values can either play a crucial role in class determination or have little (possibly no) influence on the classification result, depending on the context. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. At first, we try to classify the object (incomplete pattern) using only the available attribute values. The underlying principle is that the missing information is not crucial for classification if a specific class can be found for the object using only the available information; in this case, the object is committed to that class. If, however, the object cannot be classified without ambiguity, the missing values play a key role in achieving an accurate classification. In that case, the missing values are imputed using K-nearest neighbor (K-NN) and self-organizing map (SOM) techniques, and the edited pattern with the imputed values is then classified. The (original or edited) pattern is classified with respect to each training class, and the classification results, represented by basic belief assignments, are fused with proper combination rules to produce the credal classification. The object is allowed to belong, with different masses of belief, to specific classes and meta-classes (particular disjunctions of several single classes). The credal classification captures well the uncertainty and imprecision of the classification, and effectively reduces the misclassification rate thanks to the introduction of meta-classes. The effectiveness of the proposed method with respect to other classical methods is demonstrated in several experiments using artificial and real data sets.
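
    The adaptive-imputation logic described above (classify on the available attributes first, impute only when the decision is ambiguous) can be sketched roughly as follows. This is a minimal Python illustration, not the paper's credal classifier: it uses a plain K-NN posterior gap instead of belief-function fusion, mean-of-neighbours imputation instead of the combined K-NN/SOM scheme, and the `ambiguity_gap` threshold and function names are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def classify_incomplete(x, X_train, y_train, k=5, ambiguity_gap=0.3):
    """Classify a pattern x whose missing attributes are encoded as NaN.

    Step 1: classify using only the available attributes; commit to a class
            if the decision is unambiguous.
    Step 2: otherwise, impute the missing attributes from the K nearest
            neighbours and classify the edited (complete) pattern.
    X_train is assumed to be complete (no missing values).
    """
    avail = ~np.isnan(x)

    # Step 1: classification restricted to the available attributes.
    clf_avail = KNeighborsClassifier(n_neighbors=k).fit(X_train[:, avail], y_train)
    proba = clf_avail.predict_proba(x[avail].reshape(1, -1))[0]
    top_two = np.sort(proba)[-2:]                   # two largest class probabilities
    if top_two[1] - top_two[0] >= ambiguity_gap:    # unambiguous: commit now
        return clf_avail.classes_[np.argmax(proba)], x

    # Step 2: K-NN imputation of the missing attributes, then full classification.
    nn = NearestNeighbors(n_neighbors=k).fit(X_train[:, avail])
    _, idx = nn.kneighbors(x[avail].reshape(1, -1))
    x_edited = x.copy()
    x_edited[~avail] = X_train[idx[0]][:, ~avail].mean(axis=0)

    clf_full = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return clf_full.predict(x_edited.reshape(1, -1))[0], x_edited
```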

    Remaining Useful Life Estimation by Classification of Predictions Based on a Neuro-Fuzzy System and Theory of Belief Functions.

    Various approaches for prognostics have been developed, and data-driven methods are increasingly applied. The training step of these methods generally requires huge datasets to build a model of the degradation signal and to estimate the limit under which the degradation signal should stay. The applicability and accuracy of these methods are thereby closely related to the amount of available data, and they sometimes even require the user to make assumptions about the dynamics of health-state evolution. The aim of this paper is therefore to propose a method for prognostics and remaining useful life (RUL) estimation that starts from scratch, without any prior knowledge. Assuming that the remaining useful life can be seen as the time between the current instant and the instant where the degradation exceeds an acceptable limit, the proposition is based on a classification of predictions strategy (CPS) that relies on two components. First, it relies on an evolving real-time neuro-fuzzy system that forecasts observations in time. Second, it relies on an evidential Markovian classifier based on Dempster-Shafer theory that classifies observations into the possible functioning modes. This approach has the advantage of coping with a lack of data by using an evolving system and the theory of belief functions. Another key asset is that the prognostic system can be trained without setting any threshold. The whole proposition is illustrated and assessed using the C-MAPSS turbofan dataset. RUL estimates are shown to be very close to the actual values, and the approach appears to accurately estimate the failure instants, even with little training data.
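
    The classification-of-predictions idea (forecast future observations, then count the steps until a predicted observation is classified into the faulty functioning mode) might be sketched as below. The callables `forecast_one_step` and `classify_state` are hypothetical stand-ins for the paper's evolving neuro-fuzzy predictor and evidential Markovian classifier; the sketch only shows how an RUL estimate falls out of chaining the two.

```python
def estimate_rul(history, forecast_one_step, classify_state,
                 faulty_label="faulty", max_horizon=500):
    """Classification-of-predictions sketch for remaining useful life (RUL).

    history           : past observations of the degradation signal
    forecast_one_step : callable(sequence) -> next predicted observation
    classify_state    : callable(observation) -> functioning-mode label
    Returns the number of prediction steps until the predicted observation is
    classified as faulty, or None if no failure is predicted within the horizon.
    """
    seq = list(history)
    for step in range(1, max_horizon + 1):
        nxt = forecast_one_step(seq)      # predicted observation at t + step
        seq.append(nxt)
        if classify_state(nxt) == faulty_label:
            return step                   # RUL = steps until the faulty mode
    return None


# Toy usage with a naive linear-drift forecaster and a placeholder classifier.
rul = estimate_rul(
    history=[0.1 * t for t in range(50)],
    forecast_one_step=lambda s: s[-1] + (s[-1] - s[-2]),
    classify_state=lambda v: "faulty" if v > 10.0 else "nominal",
)
```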

    Evidence in Neuroimaging: Towards a Philosophy of Data Analysis

    Neuroimaging technology is the most widely used tool for studying human cognition. While originally a promising tool for mapping the content of cognitive theories onto the structures of the brain, recently developed tools for the analysis, handling and sharing of data have changed the theoretical landscape of cognitive neuroscience. Even with these advancements, philosophical analyses of evidence in neuroimaging remain skeptical of the promise of neuroimaging technology. These views often treat the analysis techniques used to make sense of data produced in a neuroimaging experiment as a single monolithic procedure, attributing the inferential limitations of particular analysis pipelines to the technology as a whole. Set against neuroscientists' own critical assessments of their methods and the limitations of those methods, this skepticism appears to rest on a misunderstanding of the role data analysis techniques play in neuroimaging research. My project picks up here, examining how data analysis techniques, such as pattern classification analysis, are used to assess the evidential value of neuroimaging data. The project takes the form of three papers. In the first, I identify the use of multiple data analysis techniques as an important aspect of the data interpretation process that is overlooked by critics. In the second, I develop an account of inference in neuroimaging research that is sensitive to this use of data analysis techniques, arguing that interpreting neuroimaging data is a process of isolating and explaining a variety of data patterns. In the third, I argue that the development and uptake of new techniques for analyzing data must be accompanied by changes in research practices and standards of evidence if they are to promote knowledge generation. My approach to this work is both traditionally philosophical, insofar as it involves reading and analyzing the work of philosophers and neuroscientists, and embedded, insofar as most of the research was conducted while attending lab meetings and participating in the work of the scientists whose research is the object of my study.

    Surveying human habit modeling and mining techniques in smart spaces

    A smart space is an environment, mainly equipped with Internet-of-Things (IoT) technologies, able to provide services to humans, helping them perform daily tasks by monitoring the space and autonomously executing actions, giving suggestions and sending alarms. Approaches suggested in the literature may differ in terms of the required facilities, possible applications, amount of human intervention required, and ability to support multiple users at the same time while adapting to changing needs. In this paper, we propose a Systematic Literature Review (SLR) that classifies the most influential approaches in the area of smart spaces according to a set of dimensions identified by answering a set of research questions. These dimensions make it possible to choose a specific method or approach according to the available sensors, the amount of labeled data, the need for visual analysis, and the requirements in terms of enactment and decision-making on the environment. Additionally, the paper identifies a set of challenges to be addressed by future research in the field.

    Detecting Family Resemblance: Automated Genre Classification.

    This paper presents results in automated genre classification of digital documents in PDF format. It describes genre classification as an important ingredient in contextualising scientific data and in retrieving targeted material for improving research. The paper compares the role of visual layout, stylistic features and language model features in clustering documents, and presents results in retrieving five selected genres (Scientific Article, Thesis, Periodicals, Business Report, and Form) from a pool of materials populated with documents of the nineteen most popular genres found in our experimental data set.
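
    As a rough illustration of feature-based genre retrieval of the kind compared above, the sketch below trains a linear classifier over the five target genres using text-derived (language-model-style) features only; the layout and stylistic features discussed in the paper would enter as additional feature columns, and the specific vectorizer and classifier choices are assumptions rather than the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The five target genres retrieved in the paper's experiments.
GENRES = ["Scientific Article", "Thesis", "Periodicals", "Business Report", "Form"]

def train_genre_classifier(texts, labels):
    """texts: extracted PDF text per document; labels: one of GENRES each."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word uni/bigram features
        LogisticRegression(max_iter=1000),              # linear genre classifier
    )
    return model.fit(texts, labels)
```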

    Evidential deep learning for arbitrary LIDAR object classification in the context of autonomous driving

    In traditional LIDAR processing pipelines, a point cloud is split into clusters, or objects, which are classified afterwards. This assumes that every object obtained by clustering belongs to one of the classes the classifier can recognize, which is hard to guarantee in practice. We therefore propose an evidential end-to-end deep neural network to classify LIDAR objects. The system is capable of classifying ambiguous and incoherent objects as unknown, while having been trained only on vehicles and vulnerable road users. This is achieved thanks to an evidential reformulation of generalized logistic regression classifiers, and an online filtering strategy based on statistical assumptions. Training and testing were performed on LIDAR objects that were labelled in a semi-automatic fashion and collected in different situations using an autonomous driving and perception platform.
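
    A minimal sketch of how an evidential output can flag objects as unknown is given below. It uses an evidential-deep-learning-style mapping (non-negative evidence per class, with the residual mass placed on the whole frame of discernment), which mirrors the spirit of the paper's evidential reformulation of logistic regression but is not its exact formulation; the `unknown_threshold` and function names are assumptions.

```python
import numpy as np

def evidential_decision(logits, classes, unknown_threshold=0.5):
    """Map per-class logits to singleton belief masses plus an ignorance mass,
    and return 'unknown' when the mass on the whole frame dominates.

    Evidence e_k = exp(logit_k); with K classes and S = K + sum(e), the mass on
    class k is b_k = e_k / S and the ignorance (unknown) mass is u = K / S.
    """
    logits = np.asarray(logits, dtype=float)
    evidence = np.exp(np.clip(logits, -30.0, 30.0))   # non-negative evidence per class
    k = len(classes)
    s = k + evidence.sum()
    belief = evidence / s                             # mass on each singleton class
    ignorance = k / s                                 # mass on the full frame
    if ignorance >= unknown_threshold:
        return "unknown", belief, ignorance
    return classes[int(np.argmax(belief))], belief, ignorance
```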