
    Learning preferences for personalisation in a pervasive environment

    With ever-increasing access to technological devices, services and applications, there is also an increasing burden on the end user to manage and configure such resources. This burden will continue to grow as the vision of pervasive environments, with ubiquitous access to a plethora of resources, continues to become a reality. It is key that appropriate mechanisms are developed and provided to relieve the user of such burdens. These mechanisms include personalisation systems that can adapt resources on behalf of the user in an appropriate way, based on the user's current context and goals. The key knowledge base of many personalisation systems is the set of user preferences that indicate which adaptations should be performed under which contextual situations. This thesis investigates the challenges of developing a system that can learn such preferences by monitoring user behaviour within a pervasive environment. Based on the findings of related works and experience from EU project research, several key design requirements for such a system are identified. These requirements are used to drive the design of a system that can learn accurate and up-to-date preferences for personalisation in a pervasive environment. A standalone prototype of the preference learning system has been developed. In addition, the preference learning system has been integrated into a pervasive platform developed through an EU research project. The preference learning system is fully evaluated in terms of its machine learning performance and also its utility in a pervasive environment with real end users.
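The learning task described above, deriving context-to-adaptation preference rules from monitored behaviour, can be illustrated with a minimal sketch. The contexts, actions and frequency-based rule choice here are hypothetical simplifications for illustration, not the thesis's actual algorithm:

```python
from collections import Counter, defaultdict

def learn_preferences(history):
    """Derive context -> preferred-action rules from monitored behaviour.

    `history` is an iterable of (context, action) pairs, where a context is
    a hashable snapshot of the situation (here, a tuple of attribute
    values). The rule for each context is the most frequently observed
    action in that context.
    """
    by_context = defaultdict(Counter)
    for context, action in history:
        by_context[context][action] += 1
    return {ctx: counts.most_common(1)[0][0]
            for ctx, counts in by_context.items()}

# Hypothetical monitored behaviour in a smart-home setting.
history = [
    (("home", "evening"), "lights_dim"),
    (("home", "evening"), "lights_dim"),
    (("home", "evening"), "lights_full"),
    (("office", "morning"), "lights_full"),
]
prefs = learn_preferences(history)
```

A real system would also have to age out stale observations so the learned rules stay up to date as the user's habits change.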

    Modelling causality in law = Modélisation de la causalité en droit

    The machine learning community’s interest in causality has significantly increased in recent years. This trend has not yet become popular in AI & Law, but it should, because the current associative ML approach reveals certain limitations that causal analysis may overcome. This research aims to discover whether formal causal frameworks can be used in AI & Law.
We proceed with a brief account of scholarship on reasoning and causality in science and in law. Traditionally, the normative frameworks for reasoning have been logic and rationality, but dual-process theory has shown that human decision-making depends on many factors that defy rationality. As such, statistics and probability were called for to improve the prediction of decisional outcomes. In law, causal frameworks have been defined by landmark decisions, but most AI & Law models today do not involve causal analysis. We provide a brief summary of these models and then attempt to apply Judea Pearl’s structural language and the Halpern-Pearl definitions of actual causality to model a few Canadian legal decisions that involve causality. The results suggest that it is not only possible to use formal causal models to describe legal decisions, but also useful, because a uniform schema eliminates ambiguity. Causal frameworks are also helpful in promoting accountability and minimizing biases.
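The but-for style of counterfactual reasoning underlying these frameworks can be sketched with a toy structural model. The scenario and variable names below are invented for illustration; the Halpern-Pearl definitions refine this simple test:

```python
def harm(driver_speeding, brakes_failed):
    """Toy structural equation: the harm occurs if the driver was
    speeding or the brakes failed (an overdetermined-cause scenario)."""
    return driver_speeding or brakes_failed

def but_for(model, facts, variable):
    """But-for test: intervene by flipping `variable` and check whether
    the modelled outcome changes."""
    actual = model(**facts)
    counterfactual = dict(facts, **{variable: not facts[variable]})
    return actual != model(**counterfactual)

facts = {"driver_speeding": True, "brakes_failed": False}
speeding_caused_harm = but_for(harm, facts, "driver_speeding")
```

When both causes are present, this naive test returns False for each of them, since neither is a but-for cause on its own; handling such overdetermination is precisely what the Halpern-Pearl definitions of actual causality are designed for.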

    Data Mining Framework for Monitoring Attacks In Power Systems

    Vast deployment of Wide Area Measurement Systems (WAMS) has facilitated increased understanding and intelligent management of today's complex power systems. Phasor Measurement Units (PMUs), an integral part of WAMS, transmit high-quality system information to the control centers every second. With the North American SynchroPhasor Initiative (NASPI), the number of PMUs deployed across the system has been growing rapidly. With this increase in the number of PMU units, the amount of data accumulated is also growing tremendously. This increase in data necessitates the use of sophisticated data processing, data reduction, data analysis and data mining techniques. WAMS is also closely associated with information and communication technologies that are capable of implementing intelligent protection and control actions in order to improve the reliability and efficiency of existing power systems. Along with the myriad advantages that these measurement systems and information and communication technologies bring, they also lead to a close synergy between heterogeneous physical and cyber components, which unlocks access points for easy cyber intrusions. This easy access has resulted in various cyber attacks on control equipment, consequently increasing the vulnerability of power systems. This research proposes a data mining based methodology that is capable of identifying attacks in the system using real-time data. The proposed methodology employs an online clustering technique to monitor only a limited number of measuring units (PMUs) deployed across the system. Two different classification algorithms are implemented to detect the occurrence of an attack along with its location. This research also proposes a methodology to differentiate physical attacks from malicious data attacks and to declare attack severity and criticality.
The proposed methodology is implemented on the IEEE 24-bus Reliability Test System using data generated for attacks at different locations, under different system topologies and operating conditions. Cross-validation studies are performed to determine all the user-defined variables involved in the data mining studies. The performance of the proposed methodology is thoroughly analyzed and the results are demonstrated. Finally, the strengths and limitations of the proposed approach are discussed.
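As a rough illustration of the two stages of such a pipeline, here is a minimal sketch assuming a single-pass "leader" clustering of measurement vectors and a nearest-centroid classification rule; the thesis's actual algorithms, features and parameters are not specified here:

```python
import math

def leader_cluster(stream, radius):
    """Single-pass 'leader' clustering of measurement vectors: a point
    joins the nearest existing cluster centre if it lies within `radius`,
    otherwise it founds a new cluster. Returns the cluster centres."""
    centres = []
    for point in stream:
        distances = [math.dist(point, c) for c in centres]
        if not distances or min(distances) > radius:
            centres.append(point)
    return centres

def classify(point, labelled_centres):
    """Nearest-centroid rule: label an incoming measurement (e.g.
    'normal' vs. 'attack') by its closest labelled cluster centre."""
    centre, label = min(labelled_centres,
                        key=lambda lc: math.dist(point, lc[0]))
    return label

# Two tight groups of (hypothetical) 2-D measurements yield two centres.
centres = leader_cluster([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.2, 5.0)],
                         radius=1.0)
```

In an online setting, the clustering step bounds how many PMUs must be monitored continuously, and the classifier then attributes an anomalous measurement to an attack type and location.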

    Neural mechanisms for reducing uncertainty in 3D depth perception

    In order to navigate and interact within their environment, animals must process and interpret sensory information to generate a representation or ‘percept’ of that environment. However, sensory information is invariably noisy, ambiguous, or incomplete due to the constraints of the sensory apparatus, and this leads to uncertainty in perceptual interpretation. To overcome these problems, sensory systems have evolved multiple strategies for reducing perceptual uncertainty in the face of uncertain visual input, thus optimizing goal-oriented behaviours. Two such strategies have been observed even in the simplest of neural systems, and are represented in Bayesian formulations of perceptual inference: sensory integration and prior experience. In this thesis, I present a series of studies that examine these processes, and the neural mechanisms underlying them in the primate visual system, by studying depth perception in human observers. Chapters 2 & 3 used functional brain imaging to localize cortical areas involved in integrating multiple visual depth cues, which enhances observers’ ability to judge depth. Specifically, we tested which of two possible computational methods the brain uses to combine depth cues. Based on the results, we applied disruption techniques to examine whether these select brain regions are critical for depth cue integration. Chapters 4 & 5 addressed the question of how memory systems operating over different time scales interact to resolve perceptual ambiguity when the retinal signal is compatible with more than one 3D interpretation of the world. Finally, we examined the role of higher cortical regions (parietal cortex) in depth perception and the resolution of ambiguous visual input by testing patients with brain lesions.
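One standard computational candidate for cue combination tested in such studies is reliability-weighted averaging, in which each cue's estimate is weighted by its inverse variance. A minimal sketch, with illustrative numbers:

```python
def integrate_cues(estimates, sigmas):
    """Reliability-weighted cue combination: each cue's depth estimate
    is weighted by its inverse variance (reliability), and the combined
    estimate has lower variance than the best single cue."""
    reliabilities = [1.0 / s ** 2 for s in sigmas]
    total = sum(reliabilities)
    combined = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    combined_sigma = (1.0 / total) ** 0.5
    return combined, combined_sigma

# Hypothetical cues: disparity says 10 cm (sigma 1), texture says 14 cm (sigma 2).
depth, sigma = integrate_cues([10.0, 14.0], [1.0, 2.0])
```

The combined estimate (10.8 cm here) sits closer to the more reliable cue, and its uncertainty falls below that of either cue alone, which is the behavioural signature such studies look for.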

    Belief augmented frames

    Ph.D. (Doctor of Philosophy)

    Computational intelligence based complex adaptive system-of-systems architecture evolution strategy

    The dynamic planning of a system-of-systems (SoS) is a challenging endeavor. Large-scale organizations and operations constantly face challenges in incorporating new systems and upgrading existing systems over a period of time under threats, constrained budgets and uncertainty. It is therefore necessary for program managers to be able to look at future scenarios and critically assess the impact of technology and stakeholder changes. Managers and engineers are always looking for options that signify affordable acquisition selections and lessen the cycle time for early acquisition and new technology addition. This research helps in analyzing sequential decisions in an evolving SoS architecture based on the wave model through three key features, namely: meta-architecture generation, architecture assessment and architecture implementation. Meta-architectures are generated using evolutionary algorithms and assessed using type-2 fuzzy nets. The approach can accommodate diverse stakeholder views, convert them to key performance parameters (KPPs) and use them for architecture assessment. On the other hand, it is not possible to implement such an architecture without persuading the constituent systems to participate in the meta-architecture. To address this issue, a negotiation model is proposed that helps the SoS manager adapt his strategy based on system owners' behavior. This work helps in capturing the varied differences in the resources required by systems to prepare for participation. The viewpoints of multiple stakeholders are aggregated to assess the overall mission effectiveness of the overarching objective. An SAR SoS example problem illustrates application of the method. A dynamic programming approach can also be used for generating meta-architectures based on the wave model.
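The meta-architecture generation step can be sketched as a minimal evolutionary search over bit-vector architectures (bit i = system i participates). The encoding, operators and toy fitness below are illustrative assumptions; in the actual approach, fitness would come from the fuzzy assessment of stakeholder-derived KPPs:

```python
import random

def evolve(fitness, n_systems=8, pop_size=20, generations=40, seed=1):
    """Minimal evolutionary search over SoS meta-architectures encoded
    as bit-vectors. Keeps the fitter half each generation and refills
    the population by one-point crossover plus rare bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_systems)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_systems)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # occasional mutation
                child[rng.randrange(n_systems)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: reward architectures with more participating systems.
best = evolve(lambda arch: sum(arch))
```

The same loop structure carries over to the wave model: each planning wave re-runs the search with an updated fitness reflecting new threats, budgets and negotiated participation.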

    A framework for trend mining with application to medical data

    This thesis presents research work conducted in the field of knowledge discovery. It presents an integrated trend-mining framework and SOMA, the application of the trend-mining framework to diabetic retinopathy data. Trend mining is the process of identifying and analysing trends in the context of the variation of support of the association/classification rules extracted from longitudinal datasets. The integrated framework covers all major processes, from data preparation to the extraction of knowledge. At the pre-processing stage, data are cleaned, transformed if necessary, and sorted into time-stamped datasets using logic rules. At the next stage, the time-stamped datasets are passed through the main processing, in which the association rule mining (ARM) technique of the matrix algorithm is applied to identify frequent rules with acceptable confidence. Mathematical conditions are applied to classify the sequences of support values into trends. Afterwards, interestingness criteria are applied to obtain interesting knowledge, and a visualization technique is proposed that maps how objects move from one time stamp to the next. A validation and verification framework (external and internal validation) is described that aims to ensure that the results at the intermediate stages of the framework are correct and that the framework as a whole can yield results that demonstrate causality. To evaluate the thesis, SOMA was developed. The dataset is, in itself, also of interest, as it is very noisy (in common with other similar medical datasets) and does not feature a clear association between specific time stamps and subsets of the data. The Royal Liverpool University Hospital has been a major centre for retinopathy research since 1991. Retinopathy is a generic term used to describe damage to the retina of the eye, which can, in the long term, lead to visual loss.
Diabetic retinopathy data are used to evaluate the framework, to determine whether SOMA can extract knowledge that is already known to clinicians. The results show that these datasets can be used to extract knowledge that can show causality between patients’ characteristics, such as the age of the patient at diagnosis, type of diabetes, duration of diabetes, and diabetic retinopathy.
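The trend-classification step, labelling a rule's sequence of support values across time stamps, can be sketched as follows. The trend labels and tolerance are illustrative assumptions, not the framework's exact mathematical conditions:

```python
def classify_trend(supports, tol=0.02):
    """Label a rule's support sequence across time stamps as a trend:
    'constant' if every step stays within `tol`, 'increasing' or
    'decreasing' if every step moves (weakly) in one direction,
    otherwise 'jumping' (mixed movement)."""
    deltas = [b - a for a, b in zip(supports, supports[1:])]
    if all(abs(d) <= tol for d in deltas):
        return "constant"
    if all(d >= -tol for d in deltas):
        return "increasing"
    if all(d <= tol for d in deltas):
        return "decreasing"
    return "jumping"

trend = classify_trend([0.10, 0.15, 0.22, 0.30])  # "increasing"
```

Interestingness criteria would then be applied on top of these labels, for example flagging only rules whose support jumps between consecutive time stamps.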

    Neuroeconomics: How Neuroscience Can Inform Economics

    Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis, and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools—including brain imaging, behavior of patients with localized brain lesions, animal behavior, and recording of single-neuron activity. The key insight for economics is that the brain is composed of multiple systems which interact. Controlled systems ("executive function") interrupt automatic ones. Emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes—supply and demand—individual decisions can be modeled as the result of two (or more) processes interacting. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility-maximization). We discuss how brain evidence complicates standard assumptions about basic preferences, to include homeostasis and other kinds of state-dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified β-δ model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decision making, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk, because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effect of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables.
Finally, economics can contribute to neuroscience, because simple rational-choice models are useful for understanding highly evolved behavior, such as motor actions that earn rewards, and Bayesian integration of sensorimotor information.
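The β-δ model of quasi-hyperbolic discounting mentioned above values a utility stream u_0, u_1, ... as u_0 + β·Σ_{t≥1} δ^t·u_t. A minimal sketch, with illustrative parameter values:

```python
def quasi_hyperbolic_value(utilities, beta, delta):
    """beta-delta (quasi-hyperbolic) discounted value of a utility
    stream u_0, u_1, ...: u_0 + beta * sum(delta**t * u_t) for t >= 1.
    beta < 1 adds a uniform extra penalty to *all* future periods, on
    top of ordinary exponential delta-discounting, producing the
    present bias that distinct limbic vs. cortical activations are
    taken to reflect."""
    u0, rest = utilities[0], utilities[1:]
    return u0 + beta * sum(delta ** t * u
                           for t, u in enumerate(rest, start=1))

# Present bias: 10 now beats 10 in one period, even with no pure
# time discounting (delta = 1), because beta < 1.
now = quasi_hyperbolic_value([10, 0], beta=0.5, delta=1.0)
later = quasi_hyperbolic_value([0, 10], beta=0.5, delta=1.0)
```

With β = 1 the model collapses to standard exponential discounting, which is what makes the β-δ form a convenient bridge between single-process and dual-process accounts.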

    Input and Intake in Language Acquisition

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain-general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from the intake encoded by the learner, and looks at how a statistical inference mechanism, coupled with a well-defined linguistic hypothesis space, could lead a learner to infer the grammar of their native language. This work draws on experimental work, corpus analyses and computational models of children acquiring word meanings, word classes and syntax in Tsez, Norwegian and English, to highlight the need for an appropriate encoding of the linguistic input in order to solve any given problem in language acquisition.