
    Monotonicity in Ant Colony Classification Algorithms

    Classification algorithms generally do not use existing domain knowledge during model construction. Models that conflict with such knowledge can see reduced acceptance, since users must be able to trust the models they use. Domain knowledge can be integrated into algorithms as semantic constraints that guide model construction. This paper proposes an extension to an existing ACO-based classification rule learner that creates lists of monotonic classification rules. The proposed algorithm was compared to a majority classifier and to the Ordinal Learning Model (OLM) monotonic learner. Our results show that the proposed algorithm outperformed OLM in predictive accuracy while still producing monotonic models.
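    A monotonic classifier should never assign a lower class to an example that dominates another featurewise. As a rough illustration only (not the ACO learner from the paper), the sketch below counts monotonicity violations of an arbitrary classifier over a toy dataset; the dataset and the thresholded toy classifier are hypothetical:

```python
from itertools import combinations

def dominates(a, b):
    """True if every feature of a is <= the corresponding feature of b."""
    return all(x <= y for x, y in zip(a, b))

def monotonicity_violations(X, predict):
    """Count comparable pairs whose predicted label decreases while the
    feature vector increases (a monotonicity violation)."""
    violations = 0
    for a, b in combinations(X, 2):
        if dominates(a, b) and predict(a) > predict(b):
            violations += 1
        elif dominates(b, a) and predict(b) > predict(a):
            violations += 1
    return violations

# Hypothetical ordinal classifier: thresholds the sum of features.
predict = lambda x: 0 if sum(x) < 3 else 1
X = [(1, 1), (2, 2), (0, 1)]
print(monotonicity_violations(X, predict))  # → 0 (this classifier is monotone)
```

    A rule-list learner with a monotonicity constraint would aim to drive this count to zero on any dataset.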

    Threshold-optimized decision-level fusion and its application to biometrics

    Fusion is a popular way to increase the reliability of biometric verification. In this paper, we propose an optimal decision-level fusion scheme using the AND or OR rule, based on optimizing matching-score thresholds. The proposed fusion scheme always yields an improvement in the Neyman–Pearson sense over the component classifiers being fused. The theory of threshold-optimized decision-level fusion is presented, and its applications are discussed. Fusion experiments are carried out on the FRGC database, which contains 2D texture data and 3D shape data. The proposed decision fusion improves system performance, comparably to or better than conventional score-level fusion. Notably, in practice, threshold-optimized decision-level fusion by the OR rule is especially useful in the presence of outliers.
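    AND-rule decision fusion of two matchers can be sketched as a grid search over the pair of thresholds, minimizing the false reject rate subject to a ceiling on the false accept rate (a Neyman–Pearson style criterion). This is an illustrative toy under assumed score pairs, not the paper's actual optimization procedure; all names and data here are invented:

```python
def and_rule(s1, s2, t1, t2):
    """Accept only if both matchers' scores clear their thresholds."""
    return s1 >= t1 and s2 >= t2

def rates(genuine, impostor, t1, t2, rule):
    """FAR/FRR of the fused decision over labelled score pairs."""
    frr = sum(not rule(a, b, t1, t2) for a, b in genuine) / len(genuine)
    far = sum(rule(a, b, t1, t2) for a, b in impostor) / len(impostor)
    return far, frr

def optimize_thresholds(genuine, impostor, far_target, rule=and_rule):
    """Grid-search thresholds; minimise FRR subject to FAR <= far_target."""
    grid = sorted({s for pair in genuine + impostor for s in pair})
    best = None
    for t1 in grid:
        for t2 in grid:
            far, frr = rates(genuine, impostor, t1, t2, rule)
            if far <= far_target and (best is None or frr < best[0]):
                best = (frr, t1, t2)
    return best  # (FRR, t1, t2) at the chosen operating point

genuine = [(0.9, 0.8), (0.7, 0.9)]   # hypothetical matched-pair scores
impostor = [(0.2, 0.3), (0.6, 0.1)]
print(optimize_thresholds(genuine, impostor, far_target=0.0))
```

    The OR rule would swap `and` for `or` in `and_rule`; the same search then applies.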

    An Investigation of Argumentation Theory for the Prediction of Survival in Elderly Using Biomarkers

    Research on the discovery, classification, and validation of biological markers, or biomarkers, has grown extensively in recent decades. Newly found and correctly validated biomarkers have great potential as prognostic and diagnostic indicators, but exhibit a complex relationship with pertinent endpoints such as survival or other disease manifestations. This research proposes computational argumentation theory as a starting point for tackling this problem in cases where a large amount of data is unavailable. A knowledge base containing 51 different biomarkers and their associations with mortality risk in the elderly was provided by a clinician. It was used to construct several argument-based models capable of inferring survival or non-survival. The prediction accuracy and sensitivity of these models were investigated, showing that they are in line with inductive classification using decision trees on limited data.
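    One common formal backbone for argument-based inference is Dung-style abstract argumentation, where the grounded extension collects exactly the arguments that can be defended against all attacks. The sketch below is a generic illustration with a hypothetical three-argument framework; it is not the clinician's knowledge base or the paper's models:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework by iterating the characteristic function to a fixpoint."""
    def defended(arg, ext):
        # arg is defended if every attacker of arg is itself attacked by ext
        attackers = {a for a, b in attacks if b == arg}
        return all(any((c, a) in attacks for c in ext) for a in attackers)

    ext = set()
    while True:
        new = {a for a in arguments if defended(a, ext)}
        if new == ext:
            return ext
        ext = new

# Hypothetical mini framework: argument b1 supports "survival",
# b2 attacks b1, and b3 in turn attacks b2.
args = {"b1", "b2", "b3"}
attacks = {("b2", "b1"), ("b3", "b2")}
print(sorted(grounded_extension(args, attacks)))  # → ['b1', 'b3']
```

    Here b1 survives because its only attacker, b2, is defeated by b3, so the "survival" conclusion is reinstated.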

    Mprolog as an expert system development tool

    The difficult process of designing expert systems has prompted the development of many useful expert system design tools. A new tool, called MPROLOG, has been developed into an expert system tool by giving the designer the power of a programming language, PROLOG, along with a method for specifying uncertainties. Descriptions of KEE and ART, two popular expert system design tools, and of MPROLOG are presented, along with a description of perhaps the most important phase of expert system design: knowledge acquisition. An analysis of the implementation of an MPROLOG expert system, the F-111 Wing Commander, throughout the knowledge acquisition and design phases is also documented. Finally, an evaluation of MPROLOG as an expert system design tool is presented.

    Biometric Authentication System on Mobile Personal Devices

    We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of the following five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system. Both theoretical and practical considerations are taken into account. The final system achieves an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a large range of security applications.
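    The reported equal error rate (EER) is the operating point where the false accept rate equals the false reject rate. A minimal sketch of estimating it from lists of genuine and impostor scores follows; the score values are toy data, not results from this system:

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores and return the
    rate at the point where FAR and FRR are closest (approximate EER)."""
    best_gap = 1.0
    eer = 1.0
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            eer = (far + frr) / 2
    return eer

genuine = [0.8, 0.9, 0.7, 0.95]   # hypothetical match scores
impostor = [0.1, 0.3, 0.2, 0.75]
print(equal_error_rate(genuine, impostor))  # → 0.25
```

    A real evaluation would interpolate between thresholds on far larger score sets; this discrete sweep only illustrates the definition.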

    From Data Fusion to Knowledge Fusion

    The task of "data fusion" is to identify the true values of data items (e.g., the true date of birth of Tom Cruise) among multiple observed values drawn from different sources (e.g., Web sites) of varying (and unknown) reliability. A recent survey [LDL+12] provides a detailed comparison of various fusion methods on Deep Web data. In this paper, we study the applicability and limitations of different fusion techniques on a more challenging problem: "knowledge fusion". Knowledge fusion identifies true subject-predicate-object triples extracted by multiple information extractors from multiple information sources. These extractors perform entity linkage and schema alignment, introducing an additional source of noise that is quite different from that traditionally considered in the data fusion literature, which focuses only on factual errors in the original sources. We adapt state-of-the-art data fusion techniques and apply them to a knowledge base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B Web pages, three orders of magnitude larger than the data sets used in previous data fusion papers. We show the great promise of data fusion approaches for solving the knowledge fusion problem, and suggest interesting research directions through a detailed error analysis of the methods. Comment: VLDB'201
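    At its simplest, knowledge fusion can be approximated by voting: each extracted triple votes for an object value per (subject, predicate) pair, and the majority object wins. This toy sketch illustrates only that baseline idea, not the source-reliability-aware fusion methods the paper adapts; the example extractions are hypothetical:

```python
from collections import Counter, defaultdict

def fuse_triples(extractions):
    """Naive knowledge fusion by voting: for each (subject, predicate),
    keep the object value asserted most often across extractions."""
    votes = defaultdict(Counter)
    for subject, predicate, obj in extractions:
        votes[(subject, predicate)][obj] += 1
    return {sp: counter.most_common(1)[0][0] for sp, counter in votes.items()}

# Hypothetical outputs of three extractors; two agree, one is wrong.
extractions = [
    ("Tom Cruise", "date_of_birth", "1962-07-03"),
    ("Tom Cruise", "date_of_birth", "1962-07-03"),
    ("Tom Cruise", "date_of_birth", "1963-07-03"),
]
print(fuse_triples(extractions))
# → {('Tom Cruise', 'date_of_birth'): '1962-07-03'}
```

    Real fusion methods weight votes by estimated source and extractor accuracy rather than counting them uniformly, which is what distinguishes the techniques studied in the paper from this baseline.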