
    Software redundancy: what, where, how

    Software systems have become pervasive in everyday life and are a core component of many crucial activities. An inadequate level of reliability may lead to the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example in fault-tolerance and reliability engineering and in self-adaptive and self-healing programs. Despite these many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy both qualitatively and quantitatively. We first formalize the intuitive notion of redundancy, whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy. Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems and that it can be successfully exploited for good purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems, to help developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
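    To make the notion concrete, the hypothetical Python sketch below (ours, not taken from the thesis) shows two method sequences on a standard list that are intrinsically redundant in this sense: they leave the object in the same final state while executing different code.

    # Hypothetical illustration of intrinsic redundancy: two different
    # method sequences on a Python list that reach the same final state
    # through different executions.

    def variant_a(xs, item):
        # Direct insertion at the end.
        xs.append(item)
        return xs

    def variant_b(xs, item):
        # Same observable effect via a different execution path:
        # extend with a one-element list.
        xs.extend([item])
        return xs

    a = variant_a([1, 2, 3], 4)
    b = variant_b([1, 2, 3], 4)
    assert a == b == [1, 2, 3, 4]   # observationally equivalent outcomes
    print("redundant method sequences yield the same state:", a)

    In the terms of the abstract, such a pair would likely count as a fairly shallow difference; a deep difference would involve genuinely different algorithms behind the two sequences.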

    LIDA: A Working Model of Cognition

    In this paper we present the LIDA architecture as a working model of cognition. We argue that such working models are broad in scope and address real-world problems, in comparison to experimentally based models, which focus on specific pieces of cognition. While experimentally based models are useful, we need a working model of cognition that integrates what we know from neuroscience, cognitive science and AI. The LIDA architecture provides such a working model. A LIDA-based cognitive robot or software agent will be equipped with multiple learning mechanisms. With artificial feelings and emotions as primary motivators and learning facilitators, such systems will ‘live’ through a developmental period during which they will learn in multiple ways to act in an effective, human-like manner in complex, dynamic, and unpredictable environments. We discuss the integration of these learning mechanisms into the existing IDA architecture as a working model of cognition.
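    As a purely illustrative sketch (not part of the LIDA model itself), the toy loop below shows one way the idea of feelings as motivators and learning facilitators can be rendered in code: a scalar "feeling" both shapes the reward signal and scales the learning rate.

    import random

    # Toy perceive-attend-act loop in which a scalar "feeling" acts both
    # as motivator (it shapes the reward) and as learning facilitator
    # (it scales the learning rate).  Illustrative only; this is not the
    # LIDA architecture.

    ACTIONS = ["approach", "avoid", "wait"]
    values = {a: 0.0 for a in ACTIONS}            # learned action values

    for cycle in range(200):
        situation = random.uniform(-1.0, 1.0)     # stand-in for perception
        affect = situation                        # the agent's "feeling" about it
        # Attend/select: mostly exploit learned values, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        reward = affect if action == "approach" else 0.0
        rate = 0.05 * (1.0 + abs(affect))         # stronger feelings, faster learning
        values[action] += rate * (reward - values[action])

    print(values)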

    A half century of progress towards a unified neural theory of mind and brain with applications to autonomous adaptive agents and mental disorders

    Invited article for the book Artificial Intelligence in the Age of Neural Networks and Brain Computing, R. Kozma, C. Alippi, Y. Choe, and F. C. Morabito, Eds. Cambridge, MA: Academic Press. This article surveys some of the main design principles, mechanisms, circuits, and architectures that have been discovered during a half century of systematic research aimed at developing a unified theory that links mind and brain, and shows how psychological functions arise as emergent properties of brain mechanisms. The article describes a theoretical method that has enabled such a theory to be developed in stages by carrying out a kind of conceptual evolution. It also describes revolutionary computational paradigms, such as Complementary Computing and Laminar Computing, that constrain the kind of unified theory that can describe the autonomous adaptive intelligence that emerges from advanced brains. Adaptive Resonance Theory, or ART, is one of the core models that has been discovered in this way. ART proposes how advanced brains learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events. ART is not, however, a “theory of everything”, if only because, due to Complementary Computing, different matching and learning laws tend to support perception and cognition on the one hand, and spatial representation and action on the other. The article mentions why a theory of this kind may be useful in the design of autonomous adaptive agents in engineering and technology. It also notes how the theory has led to new mechanistic insights about mental disorders such as autism, medial temporal amnesia, Alzheimer’s disease, and schizophrenia, along with mechanistically informed proposals about how their symptoms may be ameliorated.
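    Since ART is central to the article, the fragment below sketches an ART-1-style category learning step for binary inputs (category choice, vigilance test, fast learning). It is a minimal textbook-style rendering with the usual choice parameter alpha and vigilance parameter rho, not the full circuits the article describes.

    import numpy as np

    # Minimal ART-1-style clustering of binary vectors: pick the best-matching
    # category by the choice function, test it against the vigilance criterion,
    # and either learn into it (fast learning) or recruit a new category.
    # A textbook-style sketch, not the full ART circuitry.

    def art1(inputs, rho=0.7, alpha=0.001):
        categories = []                                  # learned weight vectors
        for x in inputs:
            x = np.asarray(x, dtype=float)
            # Search categories in descending order of the choice function
            # |x AND w| / (alpha + |w|).
            order = sorted(range(len(categories)),
                           key=lambda j: -np.minimum(x, categories[j]).sum()
                                          / (alpha + categories[j].sum()))
            for j in order:
                match = np.minimum(x, categories[j]).sum() / x.sum()
                if match >= rho:                         # resonance: vigilance passed
                    categories[j] = np.minimum(x, categories[j])   # fast learning
                    break
            else:                                        # no resonance: new category
                categories.append(x.copy())
        return categories

    print(art1([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]))

    Raising rho toward 1 makes the sketch create many fine-grained categories, while lowering it yields coarser ones, which is the usual way the vigilance parameter trades specificity against generalization.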

    Integrating Constrained Experiments in Long-term Human-Robot Interaction using Task– and Scenario–based Prototyping

    In order to investigate how the use of robots may impact everyday tasks, 12 participants interacted with a University of Hertfordshire Sunflower robot over a period of 8 weeks in the university’s Robot House. Participants performed two constrained tasks, one physical and one cognitive, four times over this period. Participant responses were recorded using a variety of measures, including the System Usability Scale and the NASA Task Load Index. The use of the robot impacted the experienced workload of the participants differently for the two tasks, and this effect changed over time. In the physical task, there was evidence of adaptation to the robot’s behaviour. For the cognitive task, the use of the robot was experienced as more frustrating in the later weeks.

    AMISEC: Leveraging Redundancy and Adaptability to Secure AmI Applications

    Security in Ambient Intelligence (AmI) poses many challenges due to the inherently insecure nature of wireless sensor nodes. However, two characteristics of these environments can be used effectively to prevent, detect, and confine attacks: redundancy and continuous adaptation. In this article we propose a global strategy and a system architecture to cope with security issues in AmI applications at different levels. Unlike previous approaches, we assume that individual wireless nodes are vulnerable. We present an agent-based architecture with supporting services that is shown to be adequate to detect and confine common attacks. Decisions at different levels are supported by a trust-based framework that incorporates both good and bad reputation feedback while remaining resistant to bad-mouthing attacks. We also propose a set of services that can be used to handle identification, authentication, and authorization in intelligent ambients. The resulting approach takes into account practical issues such as resource limitation, bandwidth optimization, and scalability.
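    As a hypothetical illustration of reputation feedback with some resistance to bad-mouthing (one common approach, not necessarily the framework proposed in the article), the sketch below uses a Beta-style reputation in which a rater's report is weighted by the rater's own current trust.

    # Hypothetical sketch of trust-weighted reputation feedback, a common
    # way to dampen bad-mouthing: a rater's report counts in proportion to
    # the rater's own current trust.  Not the article's actual framework.

    class Reputation:
        def __init__(self):
            self.good = 1.0   # pseudo-counts of positive reports (Beta prior)
            self.bad = 1.0    # pseudo-counts of negative reports

        def trust(self):
            # Expected probability of good behaviour under the Beta model.
            return self.good / (self.good + self.bad)

        def report(self, positive, rater_trust):
            # Weight the report by how much we trust the rater.
            if positive:
                self.good += rater_trust
            else:
                self.bad += rater_trust

    node = Reputation()
    honest_rater, liar = Reputation(), Reputation()
    liar.bad += 8.0                     # the liar already has a poor record

    node.report(positive=True, rater_trust=honest_rater.trust())
    node.report(positive=False, rater_trust=liar.trust())   # bad-mouthing attempt
    print(round(node.trust(), 2))       # stays above 0.5: the attack is dampened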

    Context for Ubiquitous Data Management

    With the advance of ubiquitous computing technologies, we believe that for computer systems to be ubiquitous they must be context-aware. In this paper, we address the impact of context-awareness on ubiquitous data management. To do this, we give an overview of the different characteristics of context in order to develop a clear understanding of context, as well as of its implications and requirements for context-aware data management. References to recent research activities and applicable techniques are also provided.
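    As a hypothetical illustration of what context-awareness can mean for data management (an example of ours, not one taken from the paper), the sketch below attaches a simple context record to a query and uses it to adapt both the selection of data and the size of the result returned.

    from dataclasses import dataclass

    # Hypothetical sketch: a query whose behaviour adapts to the requesting
    # device's context (location and available bandwidth).  Illustrative
    # only; the paper surveys requirements rather than prescribing an API.

    @dataclass
    class Context:
        location: str
        bandwidth_kbps: int

    RESTAURANTS = [
        {"name": "Trattoria Roma", "city": "Lugano", "menu_size_kb": 400},
        {"name": "Cafe Central",   "city": "Lugano", "menu_size_kb": 40},
        {"name": "Sushi Bar",      "city": "Zurich", "menu_size_kb": 300},
    ]

    def nearby_restaurants(ctx: Context):
        # Filter by the user's current location ...
        hits = [r for r in RESTAURANTS if r["city"] == ctx.location]
        # ... and trim payloads when connectivity is poor.
        if ctx.bandwidth_kbps < 100:
            hits = [{"name": r["name"]} for r in hits]
        return hits

    print(nearby_restaurants(Context(location="Lugano", bandwidth_kbps=56)))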