
    Adaptive control with an expert system based supervisory level

    Adaptive control is presently one of the methods available for controlling plants with poorly modelled or time-varying dynamics. Although many variations of adaptive controllers exist, a common characteristic of all adaptive control schemes is that input/output measurements from the plant are used to adjust a control law in an on-line fashion. Ideally, the adjustment mechanism of the adaptive controller learns enough about the dynamics of the plant from input/output measurements to control the plant effectively. In practice, problems such as measurement noise, controller saturation, and incorrect model order, to name a few, may prevent proper adjustment of the controller, and poor performance or instability results. In this work we set out to avoid the inadequacies of procedurally implemented safety nets by introducing a two-level control scheme in which an expert system based 'supervisor' at the upper level provides all the safety net functions for an adaptive controller at the lower level. The expert system is based on a shell called IPEX (Interactive Process EXpert) that we developed specifically for the diagnosis and treatment of dynamic systems. Some of the more important functions that the IPEX system provides are: (1) temporal reasoning; (2) planning of diagnostic activities; and (3) interactive diagnosis. Also, because knowledge and control logic are separate, incorporating new diagnostic and treatment knowledge is relatively simple. We note that the flexibility available in the system to express diagnostic and treatment knowledge allows much greater functionality than could reasonably be expected from procedural implementations of safety nets. The remainder of this chapter is divided into three sections. In section 1.1 we give a detailed review of the literature in the area of supervisory systems for adaptive controllers. In particular, we describe the evolution of safety nets from simple ad hoc techniques up to the use of expert systems for more advanced supervision capabilities.
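The two-level idea in the abstract above can be sketched in miniature: a rule-based supervisor that freezes a simple gradient-adaptation law when it detects actuator saturation or an error spike. Everything below (the MIT-rule update, the thresholds, the static plant) is an illustrative assumption, not the IPEX system itself.

```python
# Hypothetical two-level sketch: a rule-based "supervisor" guarding a
# gradient (MIT-rule) adaptive gain. The update law, thresholds, and
# static plant are illustrative assumptions, not the IPEX system.

def adaptive_step(theta, y, y_ref, u, gamma=0.5):
    """One MIT-rule gradient update of the feedforward gain theta."""
    error = y - y_ref
    return theta - gamma * error * u      # descend on 0.5 * error**2

def supervisor(last_error, u, u_max=5.0, err_max=2.0):
    """Rule-based safety net: decide whether adaptation may proceed."""
    if abs(u) >= u_max:                   # actuator saturated -> freeze
        return "freeze"
    if last_error is not None and abs(last_error) > err_max:
        return "freeze"                   # error spike -> freeze adaptation
    return "adapt"

def run(plant_gain=2.0, steps=50):
    theta, last_error = 0.1, None         # start off the theta = 0 fixed point
    for _ in range(steps):
        y_ref = 1.0                       # constant reference
        u = theta * y_ref                 # feedforward control
        y = plant_gain * u                # static "plant" for brevity
        if supervisor(last_error, u) == "adapt":
            theta = adaptive_step(theta, y, y_ref, u)
        last_error = y - y_ref
    return theta, last_error
```

In a full expert-system supervisor the `supervisor` rules would be declarative knowledge, separate from the control loop, which is the separation of knowledge and control logic the abstract emphasizes.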

    Multiform Adaptive Robot Skill Learning from Humans

    Object manipulation is a basic element of everyday human life. Robotic manipulation has progressed from maneuvering single-rigid-body objects with firm grasping to maneuvering soft objects and handling contact-rich actions. Meanwhile, technologies such as robot learning from demonstration have enabled humans to train robots intuitively. This paper discusses a new level of robotic learning-based manipulation. In contrast to the single form of learning from demonstration, we propose a multiform learning approach that integrates additional forms of skill acquisition, including adaptive learning from definition and evaluation. Moreover, going beyond state-of-the-art technologies for handling purely rigid or soft objects in a pseudo-static manner, our work allows robots to learn to handle partly rigid, partly soft objects with time-critical skills and sophisticated contact control. Such capability of robotic manipulation offers a variety of new possibilities in human-robot interaction. Comment: Accepted to 2017 Dynamic Systems and Control Conference (DSCC), Tysons Corner, VA, October 11-1

    Joint strategy fictitious play with inertia for potential games

    We consider multi-player repeated games involving a large number of players with large strategy spaces and enmeshed utility structures. In these "large-scale" games, players are inherently faced with limitations in both their observational and computational capabilities. Accordingly, players in large-scale games need to make their decisions using algorithms that accommodate limitations in information gathering and processing. This disqualifies some of the well-known decision-making models such as "Fictitious Play" (FP), in which each player must monitor the individual actions of every other player and must optimize over a high-dimensional probability space. We will show that Joint Strategy Fictitious Play (JSFP), a close variant of FP, alleviates both the informational and computational burden of FP. Furthermore, we introduce JSFP with inertia, i.e., a probabilistic reluctance to change strategies, and establish convergence to a pure Nash equilibrium in all generalized ordinal potential games for both averaged and exponentially discounted historical data. We illustrate JSFP with inertia on the specific class of congestion games, a subset of generalized ordinal potential games. In particular, we illustrate the main results on a distributed traffic routing problem and derive tolling procedures that can lead to optimized total traffic congestion.
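The decision rule described in the abstract is compact enough to sketch. The toy below, assuming a 3-player congestion game with two routes where a player's cost is the number of players sharing its route, illustrates best response to time-averaged hypothetical costs plus inertia; it is an illustration of the idea, not the authors' implementation.

```python
import random

# Illustrative sketch (not the paper's code) of JSFP with inertia on a
# tiny congestion game: 3 players each choose route "A" or "B", and a
# player's cost is the number of players on its chosen route.

def cost(route, profile):
    """Congestion cost of `route` under the joint action `profile`."""
    return sum(1 for r in profile if r == route)

def jsfp_with_inertia(n_players=3, steps=200, alpha=0.3, seed=0):
    rng = random.Random(seed)
    actions = [rng.choice("AB") for _ in range(n_players)]
    # Running average of each route's hypothetical cost, per player,
    # evaluated against the others' actually played joint actions.
    avg = [{"A": 0.0, "B": 0.0} for _ in range(n_players)]
    for t in range(1, steps + 1):
        for i in range(n_players):
            for r in "AB":
                hyp = cost(r, actions[:i] + [r] + actions[i + 1:])
                avg[i][r] += (hyp - avg[i][r]) / t   # averaged history
        new = list(actions)
        for i in range(n_players):
            best = min("AB", key=lambda r: avg[i][r])
            # Inertia: switch to the best response only with prob 1 - alpha.
            if best != actions[i] and rng.random() > alpha:
                new[i] = best
        actions = new
    return actions
```

Note the informational saving the abstract highlights: each player tracks only two averaged numbers (one per own action), not the empirical frequencies of every other player's strategy.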

    A two-band approach to nλ phase error corrections with LBTI's PHASECam

    PHASECam is the Large Binocular Telescope Interferometer's (LBTI) phase sensor, a near-infrared camera used to measure tip/tilt and phase variations between the two AO-corrected apertures of the Large Binocular Telescope (LBT). Tip/tilt and phase sensing are currently performed in the H (1.65 μm) and K (2.2 μm) bands at 1 kHz, and the K-band phase telemetry is used to send tip/tilt and Optical Path Difference (OPD) corrections to the system. However, phase variations outside the range [-π, π] are not sensed, and thus are not fully corrected during closed-loop operation. PHASECam's phase unwrapping algorithm, which attempts to mitigate this issue, still occasionally fails in the case of fast, large phase variations. This can cause a fringe jump, in which case the unwrapped phase will be incorrect by a wavelength or more. This can currently be corrected manually by the observer, but doing so is inefficient. A more reliable and automated solution is desired, especially as the LBTI begins to commission further modes which require robust, active phase control, including controlled multi-axial (Fizeau) interferometry and dual-aperture non-redundant aperture masking interferometry. We present a multi-wavelength method of fringe jump capture and correction which involves direct comparison between the K-band and currently unused H-band phase telemetry. Comment: 17 pages, 10 figures
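The ambiguity the abstract describes is easy to reproduce numerically. The sketch below uses the H and K wavelengths quoted above, but the detection routine itself is an invented illustration of the two-band idea, not PHASECam's algorithm: an OPD error of exactly one K-band wavelength leaves the wrapped K phase unchanged, yet shifts the wrapped H phase, so comparing the two bands reveals the jump.

```python
import math

# Two-band fringe-jump detection sketch (illustrative, not PHASECam's
# algorithm). Wavelengths in microns are from the abstract.
LAM_K, LAM_H = 2.2, 1.65

def wrap(phi):
    """Wrap a phase into (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

def phases(opd):
    """Wrapped phases seen in each band for a given OPD (microns)."""
    return wrap(2 * math.pi * opd / LAM_K), wrap(2 * math.pi * opd / LAM_H)

def detect_jump(opd_estimate, phi_h, n_max=1):
    """Pick the integer number of K wavelengths that best reconciles
    the K-band OPD estimate with the observed H-band phase.
    Only unambiguous within the K/H synthetic wavelength
    (LCM(2.2, 1.65) = 6.6 microns, i.e. +/- 1 lambda_K here)."""
    return min(
        range(-n_max, n_max + 1),
        key=lambda n: abs(
            wrap(2 * math.pi * (opd_estimate + n * LAM_K) / LAM_H - phi_h)
        ),
    )
```

For example, if a fringe jump biases the K-band OPD estimate low by one λK, the H-band phase disagrees with the estimate, and the search recovers the missing integer wavelength.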

    The Structured Process Modeling Method (SPMM) : what is the best way for me to construct a process model?

    More and more organizations turn to the construction of process models to support strategic and operational tasks. At the same time, reports indicate quality issues for a considerable portion of these models, caused by modeling errors. The research described in this paper therefore investigates the development of a practical method to determine and train an optimal process modeling strategy that aims to decrease the number of cognitive errors made during modeling. Such cognitive errors originate in inadequate cognitive processing caused by the inherent complexity of constructing process models. The method helps modelers derive their personal cognitive profile and the related optimal cognitive strategy that minimizes these cognitive failures. The contribution of the research consists of the conceptual method and an automated modeling strategy selection and training instrument. These two artefacts were positively evaluated in a laboratory experiment covering multiple modeling sessions and involving a total of 149 master students at Ghent University.

    MARVEL: measured active rotational-vibrational energy levels

    An algorithm is proposed, based principally on an earlier proposition of Flaud and co-workers [Mol. Phys. 32 (1976) 499], that inverts the information contained in uniquely assigned experimental rotational-vibrational transitions in order to obtain measured active rotational-vibrational energy levels (MARVEL). The procedure starts with collecting, critically evaluating, selecting, and compiling all available measured transitions, including assignments and uncertainties, into a single database. Then, spectroscopic networks (SNs) are determined which contain all interconnecting rotational-vibrational energy levels supported by the grand database of the selected transitions. Adjustment of the uncertainties of the lines is performed next, with the help of a robust weighting strategy, until a self-consistent set of lines and uncertainties is achieved. Inversion of the transitions through a weighted least-squares-type procedure results in MARVEL energy levels and associated uncertainties. Local sensitivity coefficients can be computed for each energy level. The resulting set of MARVEL levels is called active because, when new experimental measurements become available, the same evaluation, adjustment, and inversion procedure should be repeated in order to obtain more dependable energy levels and uncertainties. MARVEL is tested on the example of the H₂¹⁷O isotopologue of water, and a list of 2736 dependable energy levels, based on 8369 transitions, has been obtained. (c) 2007 Elsevier Inc. All rights reserved.
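The inversion step at the heart of the procedure can be illustrated on a toy spectroscopic network. The code below is a minimal sketch, not the MARVEL code: three levels, three measured transitions with uncertainties, the ground level pinned to zero to fix the energy origin, and a weighted least-squares solve of the resulting normal equations.

```python
# Toy illustration (not the MARVEL code) of the core inversion step:
# measured transitions nu = E_upper - E_lower, weighted by 1/sigma^2,
# are inverted by weighted least squares for the energy levels, with
# the ground level pinned at 0 to remove the overall energy offset.

def invert_levels(n_levels, transitions):
    """transitions: list of (upper, lower, nu, sigma) tuples.
    Returns levels E[0..n_levels-1] with E[0] = 0 by convention."""
    m = n_levels - 1                       # unknowns are E[1..]
    A = [[0.0] * m for _ in range(m)]      # normal-equation matrix
    b = [0.0] * m
    for up, lo, nu, sigma in transitions:
        w = 1.0 / sigma ** 2
        for i, s_i in ((up, 1.0), (lo, -1.0)):
            if i == 0:
                continue                   # ground level is fixed
            b[i - 1] += w * s_i * nu
            for j, s_j in ((up, 1.0), (lo, -1.0)):
                if j != 0:
                    A[i - 1][j - 1] += w * s_i * s_j
    # Tiny Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    E = [0.0] * m
    for r in range(m - 1, -1, -1):
        E[r] = (b[r] - sum(A[r][c] * E[c] for c in range(r + 1, m))) / A[r][r]
    return [0.0] + E
```

The "active" character of MARVEL then amounts to rerunning this inversion whenever new transitions are added to the database, so the levels and their uncertainties stay current.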

    Classification and reduction of pilot error

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot-error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and the error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.