9 research outputs found

    Quantum Cognition based on an Ambiguous Representation Derived from a Rough Set Approximation

    Full text link
    Over recent years, in a series of papers by Arecchi and others, a model for the cognitive processes involved in decision making has been proposed and investigated. The key element of this model is the expression of apprehension and judgement, the basic cognitive processes of decision making, as an inverse Bayes inference classifying the information content of neuron spike trains. For successive plural stimuli, it has been shown that this inference, equipped with basic non-algorithmic jumps, exhibits quantum-like characteristics. We show here that such a decision-making process is consistently related to an ambiguous representation by an observer within a universe of discourse. In our work, the ambiguous representation of an object or stimulus is defined by a pair of maps from objects of a set to their representations, where the two maps are interrelated in a particular structure. The a priori and a posteriori hypotheses in Bayes inference are replaced by the upper and lower approximations, respectively, of the initial data sets, each derived with respect to one of the maps. We show further that, due to the particular structural relation between the two maps, the logical structure of such combined approximations can only be expressed as an orthomodular lattice and can therefore be represented by a quantum rather than a Boolean logic. To our knowledge, this is the first investigation aiming to reveal the concrete logical structure of inverse Bayes inference in cognitive processes. Comment: 23 pages, 8 figures, original research paper
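
    The upper and lower approximations that this abstract puts in place of the a priori and a posteriori hypotheses are the standard rough set constructions. Below is a minimal sketch of those two approximations for a single representation map; the toy universe, the map and all names are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of rough set lower/upper approximations of a target set X
# under the partition induced by a representation map (objects mapped to the
# same value are indiscernible). Toy data only; not the paper's construction.

from collections import defaultdict

def approximations(universe, representation, target):
    """Return (lower, upper) approximations of `target` under the partition
    induced by `representation`."""
    blocks = defaultdict(set)
    for obj in universe:
        blocks[representation(obj)].add(obj)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:      # block lies entirely inside the target set
            lower |= block
        if block & target:       # block intersects the target set
            upper |= block
    return lower, upper

if __name__ == "__main__":
    U = set(range(8))
    X = {1, 2, 3, 5}                 # target concept
    rep = lambda o: o // 2           # coarse representation map
    lo, up = approximations(U, rep, X)
    print("lower:", sorted(lo), "upper:", sorted(up))
```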

    A New Information Filling Technique Based On Generalized Information Entropy

    Get PDF
    Multi-sensor decision fusion, used for discovering important facts hidden in a mass of data, has become a widespread topic in recent years and has gradually been applied in failure analysis, system evaluation and other fields of big data processing. Handling incompleteness is a key problem of decision fusion during the experiments, and it is addressed by the technique proposed in this paper. Firstly, as a generalization of the classical rough set, an interval similarity relation is employed to classify not only single-valued data but also interval-valued data in information systems. Then, a new kind of generalized information entropy, called "H’-Information Entropy", is suggested based on the interval similarity relation to measure the uncertainty and the classification ability of an information system. Thus, the proposed information filling technique, which uses the properties of H’-Information Entropy, can be applied to replace missing data with smaller estimation intervals. Finally, the feasibility and advantages of this technique are demonstrated in two real decision fusion applications, whose performance is evaluated by quantifying the E-Condition Entropy.
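
    The abstract does not define H’-Information Entropy, so the sketch below only illustrates the general filling idea it describes: classify records by an interval similarity measure and replace a missing attribute with an interval estimated from the most similar records. The overlap-based similarity, the threshold and the toy data are assumptions for illustration, not the paper's definitions.

```python
# Hedged sketch of interval-based information filling: a missing attribute is
# replaced by the hull of values from records that are interval-similar on the
# known attributes. Similarity measure and data are illustrative placeholders.

def interval_similarity(a, b):
    """Overlap-based similarity of two intervals (a point value is [v, v])."""
    (a1, a2), (b1, b2) = a, b
    inter = max(0.0, min(a2, b2) - max(a1, b1))
    union = max(a2, b2) - min(a1, b1)
    return 1.0 if union == 0 else inter / union

def fill_missing(records, attr, threshold=0.5):
    """Replace None in column `attr` with an interval estimated from records
    whose known attributes are interval-similar above `threshold`."""
    for r in records:
        if r[attr] is not None:
            continue
        candidates = [
            s[attr] for s in records
            if s[attr] is not None
            and all(
                interval_similarity(r[k], s[k]) >= threshold
                for k in r if k != attr and r[k] is not None and s[k] is not None
            )
        ]
        if candidates:
            r[attr] = (min(lo for lo, _ in candidates),
                       max(hi for _, hi in candidates))
    return records

if __name__ == "__main__":
    data = [
        {"a": (1.0, 1.2),  "b": (3.0, 3.5)},
        {"a": (1.1, 1.3),  "b": (3.2, 3.6)},
        {"a": (1.0, 1.25), "b": None},       # missing value to be filled
    ]
    print(fill_missing(data, "b"))
```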

    Rating methodology for real estate markets – Poland case study

    Get PDF
    The development of the real estate market is conditioned by a variety of endogenous and exogenous factors. Selected factors determine the local character of the real estate market, whereas others contribute to its classification as one of the main branches of the national economy. Rapid economic growth and the search for new investment opportunities have turned the real estate market into a highly competitive arena where various players carry out diverse investment strategies. Investors search for similarities that would enable them to develop risk-minimizing strategies. Ratings are a modern tool that can be deployed in analyses and predictions of real estate market potential. This paper proposes a methodology for developing real estate market ratings, and it identifies the types of information and factors which affect decision-making on real estate markets. The following research hypotheses are formulated and tested in the article: 1) a real estate market can be rated in view of its significance for the local and national economy, 2) real estate market ratings support market participants in the decision-making process.

    Feature Selection for DNA Microarrays Using a Cuckoo Search

    Get PDF
    In this article, a hybrid method for the selection and classification of DNA microarray data is proposed. First, the method combines the subsets of relevant genes obtained from five filter methods; then, an algorithm based on a cuckoo search combined with an SVM classifier is implemented. The hybrid algorithm explores the subset obtained in the previous stage and selects the genes that achieve high performance when training the classifier. In the experimental results, the algorithm obtains a high classification rate while selecting a small number of genes, and the results obtained are compared with other methods reported in the literature.
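
    The abstract gives no implementation details, so the following is a hedged sketch of a generic cuckoo-search wrapper around an SVM classifier for gene selection, assuming numpy and scikit-learn. The fitness weighting, Lévy-flight step size and all parameter values are illustrative choices rather than the article's settings, and the synthetic data stands in for the microarray sets.

```python
# Hedged sketch: cuckoo search as a wrapper feature selector, scored by the
# cross-validated accuracy of a linear SVM plus a small sparsity bonus.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y, alpha=0.98):
    """Trade off SVM accuracy against the number of genes kept."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

def levy_step(size, beta=1.5):
    """Lévy-flight step (Mantegna's algorithm), as commonly used in cuckoo search."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_select(X, y, n_nests=10, n_iter=20, pa=0.25):
    n_feat = X.shape[1]
    nests = rng.random((n_nests, n_feat))            # continuous positions
    masks = nests > 0.5                              # binarised feature masks
    scores = np.array([fitness(m, X, y) for m in masks])
    for _ in range(n_iter):
        for i in range(n_nests):
            new = nests[i] + 0.01 * levy_step(n_feat)   # generate a new cuckoo
            new_mask = new > 0.5
            j = rng.integers(n_nests)                   # random nest to compare
            if fitness(new_mask, X, y) > scores[j]:
                nests[j], masks[j] = new, new_mask
                scores[j] = fitness(new_mask, X, y)
        # abandon a fraction `pa` of the worst nests and rebuild them at random
        worst = np.argsort(scores)[: int(pa * n_nests)]
        nests[worst] = rng.random((len(worst), n_feat))
        masks[worst] = nests[worst] > 0.5
        scores[worst] = [fitness(m, X, y) for m in masks[worst]]
    return np.flatnonzero(masks[np.argmax(scores)])

if __name__ == "__main__":
    X, y = make_classification(n_samples=60, n_features=40, n_informative=5,
                               random_state=0)
    print("selected features:", cuckoo_select(X, y))
```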

    Rough set and rule-based multicriteria decision aiding

    Get PDF
    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.
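
    As a small illustration of the dominance-based approximations on which the "if..., then..." rules are induced, the sketch below computes the lower approximation of an upward union of decision classes under a dominance relation on gain-type criteria. The evaluations, the decision scale and the threshold are made-up examples, not data or definitions from the paper.

```python
# Hedged illustration of the dominance relation used in DRSA-style reasoning:
# an object certainly belongs to the union "class >= t" if every object that
# dominates it (is at least as good on all criteria) also has decision >= t.

def dominates(a, b):
    """a dominates b if a is at least as good as b on every gain-type criterion."""
    return all(ai >= bi for ai, bi in zip(a, b))

def lower_approx_upward(objects, decisions, t):
    """Indices of objects certainly belonging to decision classes >= t."""
    lower = []
    for i, (x, d) in enumerate(zip(objects, decisions)):
        if d < t:
            continue
        # every object dominating x must also have decision >= t
        if all(dj >= t for xj, dj in zip(objects, decisions) if dominates(xj, x)):
            lower.append(i)
    return lower

if __name__ == "__main__":
    # evaluations on two gain criteria and an ordinal decision (1 < 2 < 3)
    objs = [(5, 7), (6, 6), (4, 8), (3, 3), (6, 9)]
    dec  = [2, 3, 2, 1, 3]
    print("lower approximation of 'class >= 2':", lower_approx_upward(objs, dec, 2))
```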

    Genetic And Evolutionary Biometrics: Multiobjective, Multimodal, Feature Selection/Weighting For Tightly Coupled Periocular And Face Recognition

    Get PDF
    The Genetic & Evolutionary Computation (GEC) research community has seen the emergence of a new subarea, referred to as Genetic & Evolutionary Biometrics (GEB), as GECs have been applied to solve a variety of biometric problems. In this dissertation, we present three new GEB techniques for multibiometric recognition: Genetic & Evolutionary Feature Selection (GEFeS), Weighting (GEFeW), and Weighting/Selection (GEFeWS). Instead of selecting the most salient individual features, these techniques evolve subsets of the most salient combinations of features and/or weight features based on their discriminative ability in an effort to increase accuracy while decreasing the overall number of features needed for recognition. We also incorporate cross validation into our best performing technique in an attempt to evolve feature masks (FMs) that also generalize well to unseen subjects, and we search the value preference space in an attempt to analyze its impact with respect to optimization and generalization. Our results show that by fusing the periocular biometric with the face, we can achieve higher recognition accuracies than using the two biometric modalities independently. Our results also show that our GEB techniques are able to achieve higher recognition rates than the baseline methods, while using significantly fewer features. In addition, by incorporating machine learning, we were able to create FMs that also generalize well to unseen subjects and use less than 50% of the extracted features. Finally, by searching the value preference space, we were able to determine which weights were most effective in terms of optimization and generalization.
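
    To make the idea of evolving feature masks concrete, here is a hedged sketch of a simple genetic algorithm that evolves binary masks and scores them with a fitness combining classifier accuracy and mask sparsity, in the spirit of GEFeS. The classifier (a 3-nearest-neighbour stand-in), the fitness weights, the GA operators and the synthetic data are assumptions for illustration, not those used in the dissertation.

```python
# Hedged sketch: a genetic algorithm evolves binary feature masks, rewarding
# masks that keep accuracy high while using few features. Illustrative only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def fitness(mask, X, y, w_acc=0.9):
    """Reward accuracy on the masked features, penalise the mask size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return w_acc * acc + (1 - w_acc) * (1 - mask.sum() / mask.size)

def evolve_masks(X, y, pop=20, gens=15, p_mut=0.05):
    n = X.shape[1]
    population = rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        elite = population[int(np.argmax(scores))].copy()
        # binary tournament selection of parents
        idx = [max(rng.integers(pop, size=2), key=lambda i: scores[i])
               for _ in range(pop)]
        parents = population[idx]
        # uniform crossover with the neighbouring parent, then bit-flip mutation
        cross = rng.integers(0, 2, parents.shape).astype(bool)
        children = np.where(cross, parents, np.roll(parents, 1, axis=0))
        flips = rng.random(children.shape) < p_mut
        population = np.where(flips, 1 - children, children)
        population[0] = elite        # elitism: keep the best mask found so far
    scores = [fitness(m, X, y) for m in population]
    return population[int(np.argmax(scores))]

if __name__ == "__main__":
    X, y = make_classification(n_samples=80, n_features=30, n_informative=6,
                               random_state=1)
    best = evolve_masks(X, y)
    print("features kept:", int(best.sum()), "of", best.size)
```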

    A Survey on Evolutionary Computation Approaches to Feature Selection

    Get PDF
    Feature selection is an important task in data mining and machine learning to reduce the dimensionality of the data and increase the performance of an algorithm, such as a classification algorithm. However, feature selection is a challenging task due mainly to the large search space. A variety of methods have been applied to solve feature selection problems, where evolutionary computation (EC) techniques have recently gained much attention and shown some success. However, there are no comprehensive guidelines on the strengths and weaknesses of alternative approaches. This leads to a disjointed and fragmented field with ultimately lost opportunities for improving performance and successful applications. This paper presents a comprehensive survey of the state-of-the-art work on EC for feature selection, which identifies the contributions of these different algorithms. In addition, current issues and challenges are also discussed to identify promising areas for future research.