
    Adaptive imputation of missing values for incomplete pattern classification

    In the classification of incomplete patterns, the missing values can either play a crucial role in determining the class or have little (or even no) influence on the classification result, depending on the context. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. At first, we try to classify the object (incomplete pattern) using only the available attribute values. The underlying principle is that the missing information is not crucial for the classification if a specific class can be found for the object using only the available information; in that case, the object is committed to this particular class. However, if the object cannot be classified without ambiguity, the missing values play a key role in achieving an accurate classification. In this case, the missing values are imputed using K-nearest neighbour (K-NN) and self-organizing map (SOM) techniques, and the edited pattern is then classified. The (original or edited) pattern is classified with respect to each training class, and the classification results, represented by basic belief assignments, are fused with proper combination rules to produce the credal classification. The object is allowed to belong, with different masses of belief, to specific classes and to meta-classes (particular disjunctions of several single classes). The credal classification captures well the uncertainty and imprecision of the classification and effectively reduces the rate of misclassification thanks to the introduction of meta-classes. The effectiveness of the proposed method with respect to other classical methods is demonstrated through several experiments on artificial and real data sets.
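    The two-stage logic can be illustrated with a much-simplified sketch: classify on the available attributes first, and impute only when that result is ambiguous. Here a probability threshold stands in for the belief-function analysis, plain K-NN imputation stands in for the combined K-NN/SOM step, and names such as `ambiguity_threshold` are assumptions rather than the authors' code.

```python
# Simplified sketch, not the authors' method: classify on available attributes;
# impute with K-NN and re-classify only when the first result is ambiguous.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.neighbors import KNeighborsClassifier

def adaptive_classify(X_train, y_train, x, ambiguity_threshold=0.75, k=5):
    observed = ~np.isnan(x)                      # available attributes of the incomplete pattern
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train[:, observed], y_train)       # classify using only the observed attributes
    proba = clf.predict_proba(x[observed].reshape(1, -1))[0]
    if proba.max() >= ambiguity_threshold:       # missing values judged non-crucial
        return clf.classes_[proba.argmax()]
    # ambiguous case: impute missing values from the K nearest training patterns, then re-classify
    imputer = KNNImputer(n_neighbors=k)
    x_full = imputer.fit_transform(np.vstack([X_train, x]))[-1]
    full_clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return full_clf.predict(x_full.reshape(1, -1))[0]
```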

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbating those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability.
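    As a rough illustration of three of the listed challenges (missing data, the curse of dimensionality, and class imbalance) in an early-integration setting, the sketch below concatenates modality matrices and handles each issue with a standard estimator; the variable names and estimator choices are assumptions, not drawn from the review.

```python
# Hedged sketch of an early-integration baseline touching three of the listed
# challenges; estimators and names are illustrative, not from the review.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def early_integration_model(genomics, transcriptomics, clinical, labels):
    # naive feature-level ("early") integration: concatenate modality matrices
    X = np.hstack([genomics, transcriptomics, clinical])
    model = make_pipeline(
        SimpleImputer(strategy="median"),            # missing data
        PCA(n_components=0.95),                      # curse of dimensionality
        LogisticRegression(class_weight="balanced",  # class imbalance
                           max_iter=1000),
    )
    return model.fit(X, labels)
```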

    Adaptive probability scheme for behaviour monitoring of the elderly using a specialised ambient device

    A Hidden Markov Model (HMM), modified to work in combination with a fuzzy system, is utilised to determine the current behavioural state of the user from information obtained with specialised hardware. Due to the high dimensionality and non-linearly-separable nature of the fuzzy system and of the sensor data that inform the state decision, a new method is devised to update the HMM and replace the initial fuzzy system so that subsequent state decisions are based on the most recent information. The resultant system first reduces the dimensionality of the original information by using a manifold representation in the high dimension, which is unfolded in the lower dimension. The data are then linearly separable in the lower dimension, where a simple linear classifier, such as the perceptron used here, is applied to determine the probability of the observations belonging to a state. Experiments using the new system verify its applicability in a real scenario.
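    A hedged sketch of the "unfold the manifold, then classify linearly" idea follows; Isomap is used here only as a stand-in manifold method (the abstract does not name a specific one), and probability calibration is added so that the perceptron can output state probabilities.

```python
# Rough sketch under stated assumptions: Isomap stands in for the manifold
# unfolding step, and a calibrated perceptron yields P(state | observation).
from sklearn.manifold import Isomap
from sklearn.linear_model import Perceptron
from sklearn.calibration import CalibratedClassifierCV

def state_probability_model(sensor_obs, state_labels, n_components=2):
    embedding = Isomap(n_components=n_components)
    low_dim = embedding.fit_transform(sensor_obs)     # unfold the manifold into a low dimension
    # a plain perceptron outputs labels only; calibration turns its scores into probabilities
    clf = CalibratedClassifierCV(Perceptron(), method="sigmoid", cv=3)
    clf.fit(low_dim, state_labels)
    return embedding, clf    # use embedding.transform + clf.predict_proba on new observations
```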

    Predictive intelligence to the edge through approximate collaborative context reasoning

    We focus on Internet of Things (IoT) environments where a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring the appearance of a specific phenomenon (event). Pushing processing and knowledge inference to the edge of the IoT network allows the complexity of the event reasoning process to be distributed into many manageable pieces and to be physically located at the source of the contextual information. This enables a huge amount of rich data streams to be processed in real time that would be prohibitively complex and costly to deliver on a traditional centralized Cloud system. We propose a lightweight, energy-efficient, distributed, adaptive, multiple-context-perspective event reasoning model under uncertainty on each IoT device (sensor/actuator). Each device senses and processes context data and infers events based on different local context perspectives: (i) expert knowledge on event representation, (ii) outlier inference, and (iii) deviation from the locally predicted context. Such a novel approximate reasoning paradigm is achieved through a contextualized, collaborative, belief-driven clustering process, where clusters of devices are formed according to their belief in the presence of events. Our distributed and federated intelligence model efficiently identifies any localized abnormality in the contextual data in light of event reasoning by aggregating local degrees of belief, and it updates and adjusts its knowledge in response to contextual data outliers and novelty detection. We provide a comprehensive experimental and comparative assessment of our model over real contextual data against other localized and centralized event detection models, and show the benefits stemming from its adoption: up to three orders of magnitude less energy consumption and high quality of inference.
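    The sketch below is a strongly simplified stand-in for the multi-perspective local reasoning and the cluster-level belief aggregation: each device scores the three perspectives named above and reports a degree of belief, and the cluster decision averages them. The specific scoring rules and the `quorum` threshold are illustrative assumptions, not the paper's model.

```python
# Simplified sketch of local multi-perspective belief scoring and aggregation;
# the scoring rules and the quorum threshold are assumptions for illustration.
import numpy as np

def local_event_belief(context_window, expert_threshold, predictor):
    x = context_window[-1]
    # (i) expert knowledge: a simple rule on the raw context value
    b_expert = float(x > expert_threshold)
    # (ii) outlier inference: z-score against the local window
    z = abs(x - np.mean(context_window)) / (np.std(context_window) + 1e-9)
    b_outlier = min(z / 3.0, 1.0)
    # (iii) deviation from the locally predicted context value
    b_deviation = min(abs(x - predictor(context_window[:-1])) / (abs(x) + 1e-9), 1.0)
    return (b_expert + b_outlier + b_deviation) / 3.0

def federated_event_decision(device_beliefs, quorum=0.5):
    # devices report local degrees of belief; the cluster declares an event
    # when the aggregated belief crosses the (assumed) quorum threshold
    return np.mean(device_beliefs) >= quorum
```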

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
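    As a small illustration of item (vi), a typical transfer-learning recipe for a remote sensing scene-classification task is sketched below; the ResNet-18 backbone and the class count are assumptions for illustration, not recommendations taken from the survey.

```python
# Illustrative sketch only: fine-tune a backbone pretrained on natural images
# for an assumed remote sensing scene-classification task.
import torch.nn as nn
from torchvision import models

def rs_transfer_model(num_scene_classes=10):
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
    for p in backbone.parameters():
        p.requires_grad = False          # freeze generic low/mid-level features
    # replace the classification head for the remote sensing classes
    backbone.fc = nn.Linear(backbone.fc.in_features, num_scene_classes)
    return backbone
```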

    Adding Contextual Information to Intrusion Detection Systems Using Fuzzy Cognitive Maps

    In the last few years there has been a considerable increase in the efficiency of Intrusion Detection Systems (IDSs). However, networks are still the victims of attacks. As the complexity of these attacks keeps increasing, new and more robust detection mechanisms need to be developed. The next generation of IDSs should be designed to incorporate reasoning engines supported by contextual information about the network, cognitive information, and situational awareness in order to improve their detection results. In this paper, we propose the use of a Fuzzy Cognitive Map (FCM) in conjunction with an IDS to incorporate contextual information into the detection process. We have evaluated the use of FCMs to adjust the Basic Probability Assignment (BPA) values defined prior to the data fusion process, which is crucial for the IDS that we have developed. The experimental results that we present verify that FCMs can improve the efficiency of our IDS by reducing the number of false alarms while not affecting the number of correct detections.
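    A minimal sketch of the general idea of letting an FCM modulate BPA values before fusion is given below; the concept set, weight matrix, and scaling rule are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch, under assumptions: a standard FCM iteration modulates the
# mass assigned to "attack" before fusion; concepts and weights are illustrative.
import numpy as np

def fcm_step(state, W, steps=10):
    # standard FCM inference: repeated weighted activation with sigmoid squashing
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(state @ W)))
    return state

def adjust_bpa(bpa, context_state, W, attack_concept=0):
    # scale the "attack" mass by the converged activation of the contextual
    # concept assumed to represent attack likelihood
    activation = fcm_step(context_state, W)[attack_concept]
    adjusted = dict(bpa)
    adjusted["attack"] = bpa["attack"] * activation
    adjusted["normal"] = bpa["normal"]
    # move the removed mass into ignorance so the BPA still sums to one
    adjusted["unknown"] = 1.0 - adjusted["attack"] - adjusted["normal"]
    return adjusted
```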

    Automatic goal allocation for a planetary rover with DSmT

    In this chapter, we propose an approach for assigning an interest level to the goals of a planetary rover. Assigning an interest level to goals allows the rover to autonomously transform and reallocate them. The interest level is defined by fusing payload and navigation information. The fusion yields an 'interest map' that quantifies the level of interest of each area around the rover. In this way the planner can choose the most interesting scientific objectives to be analysed, with limited human intervention, and reallocate its goals autonomously. The Dezert-Smarandache Theory of Plausible and Paradoxical Reasoning was used for information fusion: this theory allows dealing with vague and conflicting data. In particular, it allows us to directly model the behaviour of the scientists who have to evaluate the relevance of a particular set of goals. This chapter shows an application of the proposed approach to the generation of a reliable interest map.
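    A toy sketch of the kind of conjunctive combination DSmT allows is shown below: conflicting mass is kept on the paradoxical intersection of focal elements rather than being renormalised away as in Dempster's rule. The two-atom frame and the input masses are purely illustrative, not taken from the chapter.

```python
# Toy sketch of a free-DSm conjunctive combination; frame and masses are illustrative.
from itertools import product

def dsm_conjunctive(m1, m2):
    # focal elements are frozensets of atoms; a "paradoxical" element is kept
    # as the tuple ('cap', A, B), standing for the intersection A ∩ B
    combined = {}
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        key = inter if inter else ("cap", A, B)    # keep conflict on A ∩ B
        combined[key] = combined.get(key, 0.0) + mA * mB
    return combined

# e.g. payload evidence vs. navigation evidence on atoms {'interesting', 'unreachable'}
m_payload = {frozenset({'interesting'}): 0.7, frozenset({'interesting', 'unreachable'}): 0.3}
m_nav     = {frozenset({'unreachable'}): 0.6, frozenset({'interesting', 'unreachable'}): 0.4}
print(dsm_conjunctive(m_payload, m_nav))
```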

    Multiple Density Maps Information Fusion for Effectively Assessing Intensity Pattern of Lifelogging Physical Activity

    Physical activity (PA) measurement is a crucial task in healthcare technology aimed at monitoring the progression and treatment of many chronic diseases. Traditional lifelogging PA measures are relatively costly and can only be conducted in controlled or semi-controlled environments, though they exhibit remarkable precision of PA monitoring outcomes. Recent advances in commercial wearable devices and smartphones for recording one’s lifelogging PA have popularized data capture in uncontrolled environments. However, due to diverse life patterns, the heterogeneity of connected devices, and limits on PA recognition accuracy, lifelogging PA data measured by wearable devices and mobile phones contain much uncertainty, which limits their adoption for healthcare studies. To improve the feasibility of PA tracking datasets from commercial wearable/mobile devices, this paper proposes a lifelogging PA intensity pattern decision-making approach for lifelong PA measures. The method first removes irregular uncertainties (IU) via an ellipse fitting model, then constructs a series of monthly hour-day density map images representing PA intensity patterns with regular uncertainties (RU) for each month. Finally, it applies the Dempster-Shafer theory of evidence to fuse information from these density map images into a decision-making model of the final personal lifelogging PA intensity pattern. The approach significantly reduces the uncertainty and incompleteness of datasets from third-party devices. Two case studies are carried out on a mobile personalized healthcare platform, MHA [1], connected to the mobile app Moves. The results indicate that the proposed approach can improve the effectiveness of PA tracking devices or apps for the various types of people who frequently use them as a healthcare indicator.
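    The evidence-fusion step can be sketched with a plain implementation of Dempster's rule of combination over BBAs derived from monthly density maps; the frame {low, moderate, high} and the example masses are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: Dempster's rule of combination for two BBAs; frame and masses
# are illustrative assumptions.
from itertools import product

def dempster_combine(m1, m2):
    conflict = 0.0
    combined = {}
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB                    # mass on empty intersections
    k = 1.0 - conflict                             # normalisation constant
    return {A: v / k for A, v in combined.items()}

# BBAs induced by two monthly hour-day density maps (illustrative values)
m_jan = {frozenset({'high'}): 0.5, frozenset({'moderate', 'high'}): 0.5}
m_feb = {frozenset({'moderate'}): 0.4, frozenset({'moderate', 'high'}): 0.6}
print(dempster_combine(m_jan, m_feb))
```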