
    D04.05 - Feasibility mock-ups of feedback schedulers

    Control and computation co-design deals with the interaction between the design of feedback control laws and their implementation on a real execution resource. Control design is often carried out in the framework of continuous time, or under the assumption of ideal sampling with equidistant intervals and known delays. Implementation on a real-time execution platform introduces many timing uncertainties and distortions to this ideal timing scheme, e.g. variable computation durations, complex preemption patterns between concurrent activities, uncertain network-induced communication delays, or occasional data loss. Analyzing, prototyping, simulating, and guaranteeing the safety of complex control systems are very challenging tasks. Models are needed for the mechatronic continuous system, for the discrete controllers and diagnosers, and for network behavior. Real-time properties (task response times) and the network Quality of Service (QoS) influence the controlled system's properties (Quality of Control, QoC). To obtain effective and safe systems, it is not enough to provide theoretical control laws and leave programmers and real-time systems engineers to do their best when implementing the controllers. This report first describes, through the detailed design of a quadrotor drone controller, the main features of Orccad, an integrated development environment aimed at bridging the gap between advanced control design and real-time implementation. Beyond control design and implementation, a real-time hardware-in-the-loop (HIL) simulation has been designed to assess the control design against a simulated target rather than the real plant. Using this HIL structure, several experiments with flexible real-time control features are reported, namely Kalman filtering subject to data loss, control under (m,k)-firm constraints, control with varying sampling rates, and feedback scheduling using the MPC approach.
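    The Kalman-filter-under-data-loss experiment mentioned above can be illustrated by a standard intermittent-observation filter, which always runs the time update but skips the measurement update when a sample is lost. This is a generic sketch; the function and variable names are illustrative, not Orccad's API.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R, received):
    """One step of a linear Kalman filter that tolerates lost measurements."""
    # Time update (prediction) always runs, even when the measurement is lost.
    x = A @ x
    P = A @ P @ A.T + Q
    if received:
        # Measurement update only when the sample actually arrived.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P
```

When samples are dropped, the covariance P keeps growing through the prediction step, so the filter's uncertainty honestly reflects the data loss until the next received measurement shrinks it again.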

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
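    The event stream described above (time, location, sign) can be represented as simple tuples and, for frame-based processing, accumulated into an image over a time window. This is a minimal sketch; the tuple layout and function are illustrative, not a specific camera SDK.

```python
import numpy as np

# Each event: (timestamp_us, x, y, polarity), with polarity in {+1, -1}
# encoding the sign of the brightness change at pixel (x, y).
events = [(10, 2, 3, +1), (12, 2, 3, +1), (15, 4, 1, -1)]

def accumulate(events, width, height, t0, t1):
    """Sum signed events falling in the window [t0, t1) into a 2-D frame."""
    frame = np.zeros((height, width))
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += p
    return frame
```

Such accumulated frames are one of the simplest event representations; many of the surveyed algorithms instead process events asynchronously, one at a time, to preserve the microsecond temporal resolution.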

    Energy-Aware Competitive Power Allocation for Heterogeneous Networks Under QoS Constraints

    This work proposes a distributed power allocation scheme for maximizing energy efficiency in the uplink of orthogonal frequency-division multiple access (OFDMA)-based heterogeneous networks (HetNets). The user equipment (UEs) in the network are modeled as rational agents that engage in a non-cooperative game where each UE allocates its available transmit power over the set of assigned subcarriers so as to maximize its individual utility (defined as the user's throughput per Watt of transmit power) subject to minimum-rate constraints. In this framework, the relevant solution concept is that of Debreu equilibrium, a generalization of Nash equilibrium which accounts for the case where an agent's set of possible actions depends on the actions of its opponents. Since the problem at hand might not be feasible, Debreu equilibria do not always exist. However, using techniques from fractional programming, we provide a characterization of equilibrium power allocation profiles when they do exist. In particular, Debreu equilibria are found to be the fixed points of a water-filling best response operator whose water level is a function of the minimum-rate constraints and circuit power. Moreover, we also describe a set of sufficient conditions for the existence and uniqueness of Debreu equilibria, exploiting the contraction properties of the best response operator. This analysis provides the necessary tools to derive a power allocation scheme that steers the network to equilibrium in an iterative and distributed manner, without the need for any centralized processing. Numerical simulations are then used to validate the analysis and assess the performance of the proposed algorithm as a function of the system parameters.
    Comment: 37 pages, 12 figures; to appear in IEEE Trans. Wireless Commun.
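    The water-filling structure of the best response can be sketched with a generic single-user water-filler over fixed channel gains, where the water level mu is found by bisection on the total-power budget. This is only an illustration of the classical operator: the paper's actual water level additionally depends on the minimum-rate constraints and circuit power.

```python
import numpy as np

def water_filling(gains, total_power, iters=100):
    """Classical water-filling: p_k = max(0, mu - 1/g_k), with the water
    level mu chosen by bisection so that sum(p_k) == total_power."""
    lo, hi = 0.0, total_power + 1.0 / min(gains)  # mu is bracketed here
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > total_power:
            hi = mu   # water level too high: spending too much power
        else:
            lo = mu   # water level too low: budget not exhausted
    return np.maximum(0.0, mu - 1.0 / gains)
```

The characteristic behavior, visible below, is that stronger subcarriers (larger gain, hence lower "floor" 1/g_k) receive more power, and sufficiently weak subcarriers receive none at all.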

    Control Theory in Engineering

    The subject matter of this book ranges from new control design methods to control theory applications in electrical and mechanical engineering and computing. The book covers several aspects of control theory, including new methodologies, techniques, and applications. It promotes control theory in practical applications within these engineering domains and shows how to disseminate researchers' contributions in the field. The book presents applications that improve the analysis, design, and performance of control systems at a high technical level. The authors have included worked examples and case studies resulting from their research in the field. Readers will benefit from new solutions and answers to questions related to the emerging realm of control theory in engineering applications and its implementation.

    Command-and-control (C2) theory: a challenge to control science

    by Michael Athans

    Center for Aeronautics and Space Information Sciences

    This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Sciences (CASIS) program. The topics covered are computer architecture, networking, and neural nets.

    StochKit-FF: Efficient Systems Biology on Multicore Architectures

    The stochastic modelling of biological systems is an informative and, in some cases, highly appropriate technique, which can however be more computationally expensive than other modelling approaches, such as differential equations. We present StochKit-FF, a parallel version of StochKit, a reference toolkit for stochastic simulations. StochKit-FF is based on the FastFlow programming toolkit for multicores and exploits the novel concept of selective memory. We evaluate StochKit-FF on a model of HIV infection dynamics, with the aim of extracting information from efficiently run experiments, here in terms of averages and variances and, in the longer term, of more structured data.
    Comment: 14 pages + cover page
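    The class of stochastic simulation that StochKit implements can be illustrated with Gillespie's direct method, which repeatedly samples the time to the next reaction and which reaction fires from the current propensities. This is a minimal single-trajectory sketch for exposition, not StochKit's API.

```python
import random

def gillespie(rates, updates, state, t_end, seed=0):
    """Gillespie's direct method. `rates[i](state)` gives the propensity of
    reaction i; `updates[i](state)` applies its state change in place."""
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, list(state))]
    while t < t_end:
        a = [r(state) for r in rates]   # current propensities
        a0 = sum(a)
        if a0 == 0:
            break                        # no reaction can fire: absorbed
        t += rng.expovariate(a0)         # exponential time to next reaction
        r = rng.random() * a0            # choose which reaction fires,
        i, acc = 0, a[0]                 # proportionally to its propensity
        while acc < r:
            i += 1
            acc += a[i]
        updates[i](state)
        trajectory.append((t, list(state)))
    return trajectory

# Example: pure decay A -> 0 with propensity 0.5 * #A.
decay = gillespie([lambda s: 0.5 * s[0]],
                  [lambda s: s.__setitem__(0, s[0] - 1)],
                  [50], 1000.0, seed=42)
```

Averages and variances of the kind StochKit-FF aggregates would be computed over many such independent trajectories, which is exactly the embarrassingly parallel workload that maps well onto multicores.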

    A Digital Repository and Execution Platform for Interactive Scholarly Publications in Neuroscience

    The CARMEN Virtual Laboratory (VL) is a cloud-based platform which allows neuroscientists to store, share, develop, execute, reproduce and publicise their work. This paper describes new functionality in the CARMEN VL: an interactive publications repository. This new facility allows users to link data and software to publications, so that other users can examine the data and software associated with a publication and execute the associated software within the VL using the same data as the authors used. The cloud-based architecture and SaaS (Software as a Service) framework allow vast data sets to be uploaded and analysed using software services. Thus, this new interactive publications facility allows others to build on research results through reuse. This aligns with recent moves towards open access research by funding agencies, institutions, and publishers; open access supports the reproducibility and verification of research resources and results. Publications and their associated data and software are assured of long-term preservation and curation in the repository. Further, analysing research data and reproducing the evaluations described in publications frequently requires a number of execution stages, many of which are iterative. The VL provides a scientific workflow environment to combine software services into a processing tree; these workflows can also be associated with publications and executed by users. The VL also provides a secure environment where users can decide the access rights for each resource, to ensure copyright and privacy restrictions are met.
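    The processing-tree idea behind such workflows can be sketched as nodes that apply a service to the outputs of their children, with leaves consuming the raw data. This is a hypothetical composition API for illustration only, not CARMEN's actual interface.

```python
class Service:
    """A node in a processing tree: applies `fn` to its children's outputs
    (or directly to the raw input data, if it is a leaf)."""

    def __init__(self, fn, *children):
        self.fn = fn
        self.children = children

    def run(self, data):
        if not self.children:
            return self.fn(data)
        return self.fn(*[child.run(data) for child in self.children])

# Hypothetical two-stage workflow on a recording: threshold-based event
# extraction feeding a simple counting stage.
extract = Service(lambda rec: [x for x in rec if abs(x) > 1.0])
count = Service(lambda samples: len(samples), extract)
```

Associating such a tree with a publication, together with the original input data, is what lets another user re-execute the authors' analysis end to end.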