
    Applying knowledge compilation techniques to model-based reasoning

    Researchers in the area of knowledge compilation are developing general-purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

    Merge-and-Shrink Task Reformulation for Classical Planning

    The performance of domain-independent planning systems heavily depends on how the planning task has been modeled. This makes task reformulation an important tool to get rid of unnecessary complexity and increase the robustness of planners with respect to the model chosen by the user. In this paper, we represent tasks as factored transition systems (FTS), and use the merge-and-shrink (M&S) framework for task reformulation for optimal and satisficing planning. We prove that the flexibility of the underlying representation makes the M&S reformulation methods more powerful than the counterparts based on the more popular finite-domain representation. We adapt delete-relaxation and M&S heuristics to work on the FTS representation and evaluate the impact of our reformulation.
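As a rough, invented illustration of the two core operations the M&S framework is named after (not the authors' implementation): merging two factored transition systems builds their synchronized product, and shrinking abstracts states under an equivalence map. This sketch is simplified — it only synchronizes transitions that share a label, whereas real merge-and-shrink implementations also keep self-loops for labels a factor is indifferent to.

```python
from itertools import product

def merge(ts1, ts2):
    """Synchronized product of two transition systems.

    Each ts is a dict with 'states' (a set) and 'transitions'
    (a set of (src, label, dst) triples). A product transition
    exists only when both factors move under the same label.
    """
    states = set(product(ts1["states"], ts2["states"]))
    transitions = {
        ((s1, s2), l1, (t1, t2))
        for (s1, l1, t1) in ts1["transitions"]
        for (s2, l2, t2) in ts2["transitions"]
        if l1 == l2
    }
    return {"states": states, "transitions": transitions}

def shrink(ts, equiv):
    """Abstract a transition system by a state-equivalence map."""
    states = {equiv[s] for s in ts["states"]}
    transitions = {(equiv[s], l, equiv[t]) for (s, l, t) in ts["transitions"]}
    return {"states": states, "transitions": transitions}

# Two toy factors (invented) sharing the label 'a'.
f1 = {"states": {0, 1}, "transitions": {(0, "a", 1)}}
f2 = {"states": {0, 1}, "transitions": {(0, "a", 1), (1, "b", 1)}}

merged = merge(f1, f2)
# merged has 4 product states; only the synchronized 'a'-move survives.

# Shrinking by projecting each product state onto its first component
# recovers the structure of f1.
abstract = shrink(merged, {s: s[0] for s in merged["states"]})
```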

    Software tools for the cognitive development of autonomous robots

    Robotic systems are evolving towards higher degrees of autonomy. This paper reviews the cognitive tools available nowadays for the fulfilment of abstract or long-term goals, as well as for learning and modifying robot behaviour.

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, greatly expanding from the workshop's origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics

    This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations, using an unsupervised pattern recognition and extraction approach. Existing research shows that AI/machine-learning design computation approaches require either high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on the linear-algebra factorization method Singular Value Decomposition (SVD), dimensionality reduction, and similarity measurement through unsupervised clustering. The method calculates linear approximations of the associative patterns of symbol co-occurrences in a design problem representation to infer induced coupling strengths between variables, constraints, and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. These two components of the method automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks that it performs include selection of linked design variables, parameters and constraints, design decomposition, modularity and integrative systems analysis, heuristically aiding design “case” identification, topology modeling, and layout planning. The relationship between the syntax of design representation and the encoded semantic meaning is an open design theory research question. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics.
The postulates relate the performance of the method to empirical findings and theoretical insights from cognitive neuroscience and cognitive science on how the human mind engages in symbol processing, and on the resulting capacities inherent in symbolic representational systems to encode “meaning”. The performance of the method suggests that semantic “meaning” is a higher-order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, an approach largely prevalent in AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a “symbol” is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, in a knowledge domain that traditionally relies on “giving” the system complex design domain and task knowledge to perform the same set of tasks.
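A minimal sketch of the kind of pipeline the abstract describes, not the thesis's actual implementation: build a variable-by-constraint occurrence matrix, take a truncated SVD, and group variables whose reduced-space representations are similar. The matrix contents, rank, and similarity threshold below are all invented for illustration.

```python
import numpy as np

# Rows: variables v0..v3; columns: constraints c0..c2.
# Entry = 1 if the variable appears in the constraint (invented data).
A = np.array([
    [1, 1, 0],   # v0 appears in c0, c1
    [1, 1, 0],   # v1 has the same occurrence pattern as v0
    [0, 0, 1],   # v2 appears only in c2
    [0, 1, 1],   # v3 couples c1 and c2
], dtype=float)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                       # rank of the truncated approximation (assumed)
Z = U[:, :k] * S[:k]        # variables embedded in the reduced space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Greedy threshold clustering: put each variable into the first
# cluster whose representative it resembles closely enough.
clusters = []
for i, z in enumerate(Z):
    for c in clusters:
        if cosine(Z[c[0]], z) > 0.95:
            c.append(i)
            break
    else:
        clusters.append([i])

# v0 and v1 share an occurrence pattern, so they land in one cluster,
# suggesting they are strongly coupled and can be reformulated together.
```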

    Enhancing workflow-nets with data for trace completion

    The growing adoption of IT systems for modeling and executing (business) processes or services has pushed scientific investigation towards techniques and tools which support more complex forms of process analysis. Many of these, such as conformance checking, process alignment, mining, and enhancement, rely on complete observation of past (tracked and logged) executions. In many real cases, however, the lack of human or IT support at all steps of process execution, as well as information hiding and abstraction of model and data, result in incomplete log information about both data and activities. This paper tackles the issue of automatically repairing traces with missing information, notably by considering not only activities but also the data they manipulate. Our technique recasts the problem as a reachability problem and provides an encoding in an action language, which allows virtually any state-of-the-art planner to be used to return solutions.
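The reachability idea can be illustrated with a deliberately tiny, invented example (the paper encodes workflow nets with data into an action language for a planner; this sketch uses a plain state machine and breadth-first search instead): find a shortest complete run of the model that contains the observed, incomplete trace as a subsequence.

```python
from collections import deque

# Invented toy process model: state -> [(activity, next_state)].
MODEL = {
    "start":   [("register", "opened")],
    "opened":  [("check", "checked")],
    "checked": [("approve", "done"), ("reject", "done")],
}

def repair(observed, init="start", goal="done"):
    """BFS for a shortest model run embedding `observed` as a subsequence."""
    # Search state: (model state, prefix of `observed` matched, run so far).
    queue = deque([(init, 0, [])])
    while queue:
        state, matched, run = queue.popleft()
        if state == goal and matched == len(observed):
            return run
        for activity, nxt in MODEL.get(state, []):
            # Advance the match pointer when the run reproduces the
            # next observed event; otherwise the event is an insertion.
            m = matched + (matched < len(observed)
                           and observed[matched] == activity)
            queue.append((nxt, m, run + [activity]))
    return None

# The observed log lost the 'check' event; the search fills it in.
print(repair(["register", "approve"]))
# -> ['register', 'check', 'approve']
```

Real trace completion must also repair data values manipulated by activities, which is why the paper's encoding into a planning language is much richer than this activity-only search.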

    Doctor of Philosophy

    The purpose of this constructivist grounded theory study was to identify and examine challenges and strategies used by people with parkinsonism to maintain identity. These concerns were explored within the context of daily life, vital relationships, and familiar roles. The setting was three Midwestern states during historic winter weather conditions (2013-2014). Illness descriptions were obtained through medication logs and two scales: Hoehn and Yahr staging and activities of daily living. Qualitative data consisted of 62 in-depth interviews, photos, videos, fieldnotes, and memos. Twenty-five volunteers (10 female/15 male; ages 40-95) with self-reported Parkinson disease participated. Range of disease duration was 3 months to 30 years. Disease staging: I (n = 0), II (n = 0), III (n = 14), IV (n = 8), and V (n = 3). Stage III participants completed daily living activities at an independence level of 60 to 80%, while stage V participants ranged from 20 to 30%. Twenty-one participants used carbidopa-levodopa. Analytic coding procedures generated the theory of Preserving self. This clinically logical 5-staged theory represents social and psychological processes for maintaining identity while living with a life-limiting illness. The stages and transitions are: (1) Making sense of symptoms describes noticing and taking action prediagnosis. Transition: Finding out the diagnosis was shocking, but time-limited. (2) Turning points confronted abilities with demanding tasks and strong emotions. Transition: Unsettling reminders of losses were perpetual. (3) Dilemmas of identity are the difficulties relinquishing comfortable self-attributes. Transition: Sifting and sorting is a time of grieving, letting go, and considering new self-identities. (4) Reconnecting the self synthesizes former and current identities. Transition: Balancing risks and rewards compares a lost past with possible futures.
(5) Envisioning a future demonstrates planning pragmatically with tunnel vision. Creative methods were developed for maintaining independence; abilities were frequently overestimated. An interesting finding was the use of self-adjusted carbidopa-levodopa beginning during Sifting and sorting and continuing through Reconnecting the self. Medication was used as a social prosthesis to function normally and to maintain valued relationships and roles. People with parkinsonism desperately seek normalcy. Recommendations include medication instruction to bridge wearing-off effects and sensory integrative activities as a self-reconnecting technique.

    Design and architecture of a stochastic programming modelling system

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Decision making under uncertainty is an important yet challenging task; a number of alternative paradigms which address this problem have been proposed. Stochastic Programming (SP) and Robust Optimization (RO) are two such modelling approaches, which we consider; these are natural extensions of Mathematical Programming modelling. The process that goes from the conceptualization of an SP model to its solution and the use of the optimization results is complex relative to its deterministic counterpart. Many factors contribute to this complexity: (i) the representation of the random behaviour of the model parameters, (ii) the interfacing of the decision model with the model of randomness, (iii) the difficulty in solving (very) large model instances, and (iv) the requirements for result analysis and performance evaluation through simulation techniques. An overview of the software tools which support stochastic programming modelling is given, and a conceptual structure and the architecture of such tools are presented. This conceptualization is presented as various interacting modules, namely (i) scenario generators, (ii) model generators, (iii) solvers, and (iv) performance evaluation. Reflecting this research, we have redesigned and extended an established modelling system to support modelling under uncertainty. The collective system which integrates this otherwise disparate set of model formulations within a common framework is innovative and makes the resulting system a powerful modelling tool. The introduction of scenario generation in the ex-ante decision model and the integration with simulation and evaluation for the purpose of ex-post analysis by the use of workflows is novel and makes a contribution to knowledge.
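The four interacting modules named in the abstract can be sketched as a workflow on a toy problem. Everything below — class names, the newsvendor-style model, and the parameter values — is invented for illustration; it only shows how ex-ante scenario generation feeds model generation and solving, with ex-post simulation closing the loop.

```python
import random

class ScenarioGenerator:
    """Samples scenarios for an uncertain demand parameter (invented)."""
    def __init__(self, n, seed=0):
        self.rng = random.Random(seed)
        self.n = n
    def generate(self):
        return [self.rng.uniform(80, 120) for _ in range(self.n)]

class ModelGenerator:
    """Binds the decision model to a concrete scenario set."""
    def instantiate(self, scenarios):
        # Toy newsvendor-style instance: choose stock level x,
        # pay 1 per unit stocked, earn 2 per unit sold.
        return {"scenarios": scenarios, "cost": 1.0, "price": 2.0}

class Solver:
    """Picks the stock level maximizing average profit over scenarios."""
    def solve(self, model):
        def avg_profit(x):
            return sum(model["price"] * min(x, d) - model["cost"] * x
                       for d in model["scenarios"]) / len(model["scenarios"])
        return max(range(80, 121), key=avg_profit)

class Evaluator:
    """Ex-post simulation of the chosen decision on fresh scenarios."""
    def evaluate(self, decision, scenarios, model):
        return sum(model["price"] * min(decision, d) - model["cost"] * decision
                   for d in scenarios) / len(scenarios)

# Workflow: ex-ante scenarios feed the model; ex-post ones evaluate it.
scens = ScenarioGenerator(n=200).generate()
model = ModelGenerator().instantiate(scens)
x = Solver().solve(model)
score = Evaluator().evaluate(x, ScenarioGenerator(n=500, seed=1).generate(), model)
```

Separating scenario generation from model generation is what lets the same decision model be re-instantiated against different models of randomness, which is the architectural point the abstract makes.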