
    The structure and formation of natural categories

    Categorization and concept formation are critical activities of intelligence. These processes, and the conceptual structures that support them, raise important issues at the interface of cognitive psychology and artificial intelligence. The work presumes that advances in these and other areas are best facilitated by research methodologies that reward interdisciplinary interaction. In particular, a computational model of concept formation and categorization is described that exploits a rational analysis of basic-level effects by Gluck and Corter. Their work provides a clean prescription of human category preferences, which is adapted here to the task of concept learning. Their analysis is also extended to account for typicality and fan effects, and we speculate on how these concept formation strategies might be extended to other facets of intelligence, such as problem solving.
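    Gluck and Corter's rational analysis scores a partition of objects by category utility: roughly, how much knowing an object's category improves the predictability of its feature values over the base rates, weighted by category size. A minimal sketch, assuming nominal features coded as small integers; the function name and data layout are illustrative assumptions, not code from the paper:

```python
import numpy as np

def category_utility(partition, n_values=2):
    # Category utility in the spirit of Gluck and Corter: a partition is good
    # when knowing the category makes feature values more predictable than
    # the base rates alone, averaged over categories.
    all_items = np.array([x for cluster in partition for x in cluster])
    n = len(all_items)

    def expected_matches(items):
        # Sum over attributes and values of squared value probabilities,
        # i.e. the expected number of correctly guessed feature values.
        items = np.asarray(items)
        return sum(np.mean(items[:, j] == v) ** 2
                   for j in range(items.shape[1]) for v in range(n_values))

    base = expected_matches(all_items)
    gain = sum(len(c) / n * (expected_matches(c) - base) for c in partition)
    return gain / len(partition)
```

    A partition that perfectly separates feature values scores higher than one that mixes them, which is the preference a basic-level concept learner exploits.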

    Cognitive finance: Behavioural strategies of spending, saving, and investing.

    Research in economics is increasingly open to empirical results. The advances in behavioural approaches are expanded here by applying cognitive methods to financial questions. The field of "cognitive finance" is approached by exploring decision strategies in the financial settings of spending, saving, and investing. Individual strategies in these domains are identified and elaborated to derive explanations for observed irregularities in financial decision making. Strong context dependency and adaptive learning form the basis of this cognition-based approach to finance. Experiments, ratings, and real-world data analyses are carried out in specific financial settings, combining different research methods to improve the understanding of natural financial behaviour. People use various strategies in the domains of spending, saving, and investing. Specific spending profiles can be elaborated for a better understanding of individual spending differences. It was found that people differ along four dimensions of spending, which can be labelled General Leisure, Regular Maintenance, Risk Orientation, and Future Orientation. Saving behaviour depends strongly on how people mentally structure their finances and on their self-control attitude towards decision-space restrictions, environmental cues, and contingency structures. Investment strategies depend on how the companies in which investments are placed are evaluated on factors such as Honesty, Prestige, Innovation, and Power. Furthermore, different information integration strategies can be learned in decision situations with direct feedback. The mapping of cognitive processes in financial decision making is discussed, and adaptive learning mechanisms are proposed for the observed behavioural differences. The construal of a "financial personality" is proposed, in accordance with other dimensions of personality measures, to better acknowledge and predict variations in financial behaviour. This perspective enriches economic theories and provides a useful ground for improving individual financial services.

    Psychometric Evaluation of the Altered States of Consciousness Rating Scale (OAV)

    BACKGROUND: The OAV questionnaire was developed to integrate research on altered states of consciousness (ASC). It measures three primary dimensions and one secondary dimension of ASC that are hypothesized to be invariant across ASC induction methods. The OAV rating scale has been in use for more than 20 years and has been applied internationally in a broad range of research fields, yet its factorial structure had never been tested with structural equation modeling techniques, and its psychometric properties had never been examined in large samples of experimentally induced ASC. METHODOLOGY/PRINCIPAL FINDINGS: The present study conducted a psychometric evaluation of the OAV in a sample of psilocybin- (n = 327), ketamine- (n = 162), and MDMA-induced (n = 102) ASC, obtained by pooling data from 43 experimental studies. The factorial structure was examined by confirmatory factor analysis, exploratory structural equation modeling, hierarchical item clustering (ICLUST), and multiple indicators multiple causes (MIMIC) modeling. The originally proposed model did not fit the data well even when zero constraints on non-target factor loadings and residual correlations were relaxed. Furthermore, ICLUST suggested that the "oceanic boundlessness" and "visionary restructuralization" factors could be combined at a high level of the construct hierarchy. However, because these factors were multidimensional, we extracted and examined 11 new lower-order factors. MIMIC modeling indicated that these factors were highly measurement invariant across drugs, settings, questionnaire versions, and sexes. The new factors were also demonstrated to have improved homogeneities, satisfactory reliabilities, discriminant and convergent validities, and to differentiate well among the three drug groups. CONCLUSIONS/SIGNIFICANCE: The original scales of the OAV were shown to be multidimensional constructs. Eleven new lower-order scales were constructed and demonstrated to have desirable psychometric properties. The new lower-order scales are most likely better suited to assess drug-induced ASC.
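    Scale homogeneity and reliability of the kind reported here are commonly summarized with Cronbach's alpha. The following is a standard textbook formula as a minimal illustration, not code or data from the study:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of scores on one scale.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

    Perfectly correlated items yield alpha = 1, while weakly related items pull alpha down, which is why splitting a heterogeneous scale into homogeneous lower-order scales can raise per-scale reliability.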

    Analogical Transfer in Multi-Attribute Decision Making

    People often must make inferences in domains with limited information. In such cases, they can leverage knowledge from other domains to make these inferences. This knowledge transfer process is quite common, but what are the underlying mechanisms that allow us to accomplish it? Analogical reasoning may be one such mechanism. This dissertation explores the role of analogy in decision-making performance when people face a new domain. We examine the knowledge transferred between tasks and how it influences decision making in novel tasks. Experiment I has two conditions, each with two tasks. In one condition, the two task domains are analogically related: for example, participants make inferences first about water flow and then about heat flow. In the second condition, the domains do not share obvious similarities, for example, car efficiency and water flow. Experiment I shows that participants presented with an analogy performed better than those without. We hypothesize that this knowledge transfer occurs in two ways: first, analogical mapping enhances comprehension of cue utilization in the new task; second, the strategy employed in the old task is transferred. In Chapter 3, we develop a machine learning technique to uncover the strategies used by participants. Our findings reveal that the best-performing strategy from the old task is typically carried over to the new task. In Chapter 4, we develop a model of analogical transfer in multi-attribute decision making, using the ACT-R theory of cognition as a framework and integrating a reinforcement learning model of strategy selection with a model of analogy. The simulation results show trends in both accuracy and strategy use similar to the behavioral data. Finally, we critically analyze our study's limitations and outline promising directions for future research, paving the way for a deeper understanding of knowledge transfer mechanisms.
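    A reinforcement learning model of strategy selection of the kind integrated here can be sketched, in rough ACT-R spirit, as softmax choice over running utility estimates updated by a delta rule. All names and parameter values below are illustrative assumptions, not the dissertation's actual model:

```python
import math
import random

def simulate_strategy_selection(payoffs, alpha=0.2, tau=0.5,
                                trials=500, seed=0):
    # Softmax (Boltzmann) choice over current utilities; after each choice,
    # the chosen strategy's utility moves toward the noisy reward it earned.
    rng = random.Random(seed)
    utilities = {s: 0.0 for s in payoffs}
    counts = {s: 0 for s in payoffs}
    for _ in range(trials):
        weights = {s: math.exp(u / tau) for s, u in utilities.items()}
        pick = rng.random() * sum(weights.values())
        for strategy, w in weights.items():
            pick -= w
            if pick <= 0:
                break
        reward = payoffs[strategy] + rng.gauss(0, 0.1)
        utilities[strategy] += alpha * (reward - utilities[strategy])
        counts[strategy] += 1
    return utilities, counts
```

    Over trials the higher-payoff strategy accumulates higher utility and is selected increasingly often, which is the mechanism by which a strategy learned in the old task can dominate in the new one.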

    Modifiability Of Strategy Use In Probabilistic Categorization By Rhesus Macaques (Macaca Mulatta) And Capuchin Monkeys (Cebus [Sapajus] Apella)

    Humans and nonhuman animals categorize the natural world, and their behaviors can reveal how they use the stimulus information they encounter in service of these categorizations. Rigorous psychological study of categorization has offered many insights into the processes of categorization and their relative strengths and weaknesses across species. Probabilistic categorization, in which the relationships among stimulus information and category membership that are observed by an individual are fundamentally probabilistic, presents unique challenges both to the categorizer and to the psychologist attempting to model their behavior. Challenges notwithstanding, probabilistic categorization is an exceptionally ecologically relevant problem to human and nonhuman animal cognition alike. This dissertation reports the effects of many manipulations of theoretical interest on computer-trained rhesus macaques’ and capuchin monkeys’ inferred cognitive strategy use in a computerized version of a classic probabilistic categorization task. Experiment 1 probed cognitive strategy use across five variants of the same task in which the probability structure was constant, but the appearances and onscreen locations of cues and responses changed. Experiment 2 presented a series of manipulations of theoretical interest to the animals by changing the probability and reward structures of the task. Experiment 3 manipulated the stimuli of the task in ways motivated by findings across perceptual psychology literature. Experiment 4 extended the reward rate manipulations of Experiment 2 even further. Across four experiments, inferred strategy use was remarkably stable. Those animals that used cue-based strategies often returned to the same specific strategy experiment after experiment, as the cues, responses, probabilities, and contingencies changed around them. 
    This finding is discussed in relation to questions of a real or functional ceiling on sophistication of strategy use, the robustness of cognitive individual differences in nonhuman primates, and future directions for comparative study of cognitive strategy use in probabilistic categorization.
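    Inferring an animal's cognitive strategy from its trial-by-trial responses can be sketched as comparing candidate decision rules by how well each reproduces the observed choices. This is a toy illustration, not the dissertation's inference method; the rule names are hypothetical:

```python
import numpy as np

def infer_strategy(cue_patterns, responses, candidates):
    # Score each candidate rule by the fraction of trials whose observed
    # response it predicts; the best-matching rule is the inferred strategy.
    responses = np.asarray(responses)
    scores = {}
    for name, rule in candidates.items():
        predicted = np.array([rule(c) for c in cue_patterns])
        scores[name] = float(np.mean(predicted == responses))
    return max(scores, key=scores.get), scores
```

    With probabilistic category structures the real analysis compares likelihoods rather than exact matches, but the logic is the same: stable strategy use shows up as one rule consistently fitting best across experiments.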

    Embedding Approaches for Relational Data

    Embedding methods that search for latent representations of data are important tools for unsupervised and supervised machine learning as well as for information visualisation. Over the years, such methods have continually progressed towards capturing and analysing the structure and latent characteristics of larger and more complex data. In this thesis, we examine the problem of developing efficient and reliable embedding methods for revealing, understanding, and exploiting different aspects of relational data. We split our work into three parts, each dealing with a different relational data structure. In the first part, we handle the weighted bipartite relational structure. Based on the relational measurements between two groups of heterogeneous objects, our goal is to generate low-dimensional representations of these two types of objects in a unified common space. We propose a novel method that models the embedding of each object type symmetrically to the other type, subject to flexible scale constraints and weighting parameters. The embedding generation relies on an efficient optimisation based on matrix decomposition. We also propose a simple way of measuring the conformity between the original object relations and those re-estimated from the embeddings, in order to achieve model selection by identifying the optimal model parameters with a simple search procedure. We show that our proposed method achieves consistently better or on-par results on multiple synthetic datasets, and on real-world ones from the text mining domain, when compared with existing embedding generation approaches. In the second part of this thesis, we focus on multi-relational data, where objects are interlinked by various relation types.
    Embedding approaches are very popular in this field; they typically encode objects and relation types with hidden representations and use operations between them to compute a positive scalar corresponding to each linkage's likelihood score. In this work, we aim to further improve existing embedding techniques by taking into account the multiple facets of the different patterns and behaviours of each relation type. To the best of our knowledge, this is the first latent representation model in this field that considers relational representations to be dependent on the objects they relate. The multi-modality of a relation type over different objects is effectively formulated as a projection matrix over the space spanned by the object vectors. Two large benchmark knowledge bases are used to evaluate performance on the link prediction task, and a new test-data partition scheme is proposed to offer a better understanding of the behaviour of a link prediction model. In the last part of this thesis, a much more complex relational structure is considered. In particular, we aim to develop novel embedding methods for jointly modelling the linkage structure and the objects' attributes. Traditionally, link prediction is carried out on either the linkage structure or the objects' attributes alone, which ignores the semantic connections between them and is insufficient for complex link prediction tasks. Our goal in this work is therefore to build a reliable model that fuses both sources of information to improve link prediction. The key idea of our approach is to encode both the linkage validities and the nodes' neighbourhood information into embedding-based conditional probabilities.
    Another important aspect of our proposed algorithm is that we use a margin-based contrastive training process for encoding the linkage structure, which relies on a more appropriate assumption and dramatically reduces the number of training links. In the experiments, our proposed method indeed improves link prediction performance on three citation/hyperlink datasets when compared with methods relying on only the nodes' attributes or the linkage structure, and it also achieves much better performance than the state of the art.
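    Margin-based contrastive training for link prediction can be sketched with a translational (TransE-style) scoring function: an observed triple should score at least a margin better than a corrupted one. The thesis's own models differ (e.g. relation-specific projections and neighbourhood probabilities), so this is only an illustrative stand-in:

```python
import numpy as np

def translational_score(head, relation, tail):
    # TransE-style plausibility: a small ||h + r - t|| means a likely link.
    return np.linalg.norm(head + relation - tail)

def margin_loss(pos, neg, margin=1.0):
    # Hinge loss: zero once the corrupted (negative) triple scores at least
    # `margin` worse (higher distance) than the observed one.
    return max(0.0, margin + translational_score(*pos) - translational_score(*neg))
```

    Training minimizes this loss over observed triples paired with corrupted ones, so only links whose ranking is still wrong (loss > 0) contribute gradients, which is one way a contrastive scheme can reduce the effective number of training links.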

    Improving multithreading performance for clustered VLIW architectures.

    Very Long Instruction Word (VLIW) processors are very popular in the embedded and mobile computing domains. Uses of VLIW processors range from Digital Signal Processors (DSPs), found in a plethora of communication and multimedia devices, to Graphics Processing Units (GPUs) used in gaming and high-performance computing devices. The advantage of VLIWs is their low-complexity, low-power design, which enables high performance at a low cost. Scalability of VLIWs is limited by the scalability of register file ports: it is not viable to build a VLIW processor around a single large register file because of the area and power consumption implications. Clustered VLIWs solve the register file scalability issue by partitioning the register file into multiple clusters, each with a set of functional units attached to that cluster's register file. Using a clustered approach, a higher issue width can be achieved while keeping the cost of the register file within reasonable limits, and several commercial VLIW processors have been designed using the clustered VLIW model. VLIW processors can be used to run a large set of applications. Many of these applications have good Instruction Level Parallelism (ILP) which can be efficiently exploited. However, several applications, especially control-code-dominated ones, do not exhibit good ILP and leave the processor underutilized. Cache misses are another major source of resource underutilization. Multithreading is a popular technique to improve processor utilization. Interleaved MultiThreading (IMT) hides cache miss latencies by scheduling a different thread each cycle, but cannot hide unused instruction slots. Simultaneous MultiThreading (SMT) can also remove ILP underutilization by issuing multiple threads to fill the empty instruction slots; however, SMT has a higher implementation cost than IMT.
    The thesis presents Cluster-level Simultaneous MultiThreading (CSMT), which supports a limited form of SMT where VLIW instructions from different threads are merged at cluster-level granularity. This lowers the hardware implementation cost to a level comparable to the cheap IMT technique. The more complex form of SMT combines VLIW instructions at individual operation-level granularity, which is quite expensive, especially for a mobile solution; we refer to SMT at operation level as OpSMT to reduce ambiguity. While previous studies restricted OpSMT on a VLIW to 2 threads, CSMT scales better, and up to 8 threads can be supported at a reasonable cost. The thesis proposes several other techniques to further improve CSMT performance. In particular, cluster renaming remaps the clusters used by instructions of different threads to reduce resource conflicts; it is quite effective in reducing issue-slot underutilization and significantly improves CSMT performance. The thesis also proposes: a hybrid between IMT and CSMT, which increases the number of supported threads; heterogeneous instruction merging, where some instructions are combined using SMT and the rest using CSMT; and finally split-issue, a technique that allows an instruction to be issued partially, making it easier to combine with others.
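    The cluster-level merging idea can be illustrated with a toy model: a VLIW instruction word is a tuple of per-cluster operation bundles (None marks an idle cluster), two threads' words can share a cycle iff no cluster is occupied in both, and cluster renaming remaps which physical clusters a thread occupies to reduce conflicts. This sketch, including the rotation-based renaming, is a simplification of my own, not the thesis's hardware design:

```python
def can_merge(word_a, word_b):
    # CSMT-style check: merge is legal iff no cluster is used by both threads.
    return all(a is None or b is None for a, b in zip(word_a, word_b))

def merge(word_a, word_b):
    # Combine two compatible words into the single word issued that cycle.
    return tuple(a if a is not None else b for a, b in zip(word_a, word_b))

def rename(word, offset):
    # Cluster renaming, modeled here as a rotation of occupied clusters.
    n = len(word)
    out = [None] * n
    for i, ops in enumerate(word):
        if ops is not None:
            out[(i + offset) % n] = ops
    return tuple(out)
```

    Two words that both need cluster 0 cannot merge, but after renaming one thread's word, the formerly conflicting bundles land on different clusters and a single cycle can issue both, which is exactly the issue-slot recovery cluster renaming targets.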

    A Revision of Procedural Knowledge in the conML Framework

    Machine learning methods have long been used very successfully to recognize patterns, model correlations, and generate hypotheses. However, weighing and evaluating the resulting models and hypotheses, and searching for alternatives and contradictions, are still predominantly reserved for humans. To address this, the novel concept of constructivist machine learning (conML) formalizes the limits of model validity and employs constructivist learning theory to enable doubting of new and existing models, with the possibility of integrating, discarding, combining, and abstracting knowledge. The present work identifies issues that impede the system's capability to abstract knowledge from generated models for tasks in the domain of procedural knowledge, and proposes and implements solutions. To this end, the conML framework has been reimplemented in the Julia programming language and subsequently extended. Using a synthetic dataset of impedance spectra of modeled epithelia that had previously been analyzed with an existing implementation of conML, the existing and new implementations are tested for consistency, and the proposed algorithmic changes are evaluated with respect to changes in model generation and abstraction ability when exploring unknown data. Recommendations for specific settings and suggestions for further research are derived from the results. In terms of performance, flexibility, and extensibility, the new implementation of conML in Julia provides a good starting point for further research on and application of the system.
    Contents: 1. Introduction (1.1. Research Questions); 2. Related Work (2.1. Hybrid AI Systems; 2.2. Constructivist Machine Learning (conML); 2.3. Implemented Methods: 2.3.1. Unsupervised Machine Learning, 2.3.2. Supervised Machine Learning, 2.3.3. Supervised Feature Selection, 2.3.4. Unsupervised Feature Selection); 3. Methods and Implementation (3.1. Notable Algorithmic Changes: 3.1.1. Rescaling of Target Values, 3.1.2. Extended Winner Selection; 3.2. Package Structure; 3.3. Interfaces and Implementation of Specific Methods; 3.4. Datasets); 4. Results (4.1. Validation Against the conML Prototype; 4.2. Change in Abstraction Capability: 4.2.1. Influence of Target Scaling, 4.2.2. Influence of the Parameter kappa_p, 4.2.3. Influence of the Winner Selection Procedure); 5. Discussion (5.1. Reproduction Results; 5.2. Rescaling of Constructed Targets; 5.3. kappa_p and the Selection of Winner Models); 6. Conclusions (6.1. Contributions of this Work; 6.2. Future Work); Appendices: A. Julia Language Reference; B. Additional Code Listings; C. Available Parameters (C.1. Block Processing); D. Configurations Reference (D.1. Unsupervised Methods; D.2. Supervised Methods; D.3. Feature Selection; D.4. Winner Selection; D.5. General Settings); E. Supplemental Figures (E.1. Replacing MAPE with RMSE for Z-Transform Target Scaling; E.2. Combining Target Rescaling, Winner Selection and High kappa_p).
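    The outline's item E.1 replaces MAPE with RMSE for z-transformed target scaling. A plausible motivation is easy to demonstrate: z-transformed targets are centred on zero, so MAPE's division by the true value inflates the apparent error, while RMSE is invariant to the shift and scale of the comparison. A small hypothetical illustration (the data values are made up):

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error: unstable when y_true is near zero.
    return float(np.mean(np.abs((y_pred - y_true) / y_true)))

def rmse(y_true, y_pred):
    # Root mean squared error: depends only on the residuals themselves.
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

y = np.array([10.0, 12.0, 8.0, 14.0])   # raw targets, safely away from zero
z = (y - y.mean()) / y.std()             # z-transform centres targets on zero
err = 0.1                                # same absolute prediction error
```

    With identical residuals, RMSE agrees on both scales while MAPE blows up on the z-transformed targets, making RMSE the safer criterion there.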