
    Deep Joint Source-Channel Coding for Wireless Image Transmission

    We propose a joint source and channel coding (JSCC) technique for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols. We parameterize the encoder and decoder functions by two convolutional neural networks (CNNs), which are trained jointly, and can be considered as an autoencoder with a non-trainable layer in the middle that represents the noisy communication channel. Our results show that the proposed deep JSCC scheme outperforms digital transmission concatenating JPEG or JPEG2000 compression with a capacity-achieving channel code at low signal-to-noise ratio (SNR) and channel bandwidth values in the presence of additive white Gaussian noise (AWGN). More strikingly, deep JSCC does not suffer from the “cliff effect,” and it provides a graceful performance degradation as the channel SNR varies with respect to the SNR value assumed during training. In the case of a slow Rayleigh fading channel, deep JSCC learns noise-resilient coded representations and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
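    The channel-in-the-middle idea above can be made concrete in a few lines. Below is a minimal PyTorch sketch, not the paper's exact architecture: the layer sizes, PReLU activations, and 10 dB training SNR are illustrative assumptions, and real tensors stand in for the complex-valued channel symbols. The structural point it demonstrates is the non-trainable AWGN layer sitting between a CNN encoder and decoder trained end to end.

```python
import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Non-trainable 'layer': power-normalizes the code and adds Gaussian noise at a fixed SNR (dB)."""
    def __init__(self, snr_db: float):
        super().__init__()
        self.snr_db = snr_db  # assumed training SNR, not learned

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        power = z.pow(2).mean(dim=(1, 2, 3), keepdim=True)
        z = z / torch.sqrt(power + 1e-9)           # enforce unit average symbol power
        noise_std = 10 ** (-self.snr_db / 20)      # with unit signal power, sigma follows from the SNR
        return z + noise_std * torch.randn_like(z)

class DeepJSCC(nn.Module):
    """Autoencoder with the channel as a fixed middle layer (illustrative sizes)."""
    def __init__(self, snr_db: float = 10.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(16, 8, 5, stride=2, padding=2),   # 8 feature maps play the role of channel symbols
        )
        self.channel = AWGNChannel(snr_db)               # no trainable parameters
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 5, stride=2, padding=2, output_padding=1), nn.PReLU(),
            nn.ConvTranspose2d(16, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.channel(self.encoder(x)))

model = DeepJSCC(snr_db=10.0)
x = torch.rand(4, 3, 32, 32)                 # toy batch of RGB images
loss = nn.functional.mse_loss(model(x), x)   # encoder and decoder trained jointly on distortion
```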

    Advancing automation and robotics technology for the Space Station and for the US economy, volume 2

    In response to Public Law 98-371, dated July 18, 1984, the NASA Advanced Technology Advisory Committee has studied automation and robotics for use in the Space Station. The Technical Report, Volume 2, provides background information on automation and robotics technologies and their potential, and documents: the relevant aspects of Space Station design; representative examples of automation and robotics applications; the state of the technology and the advances needed; and considerations for technology transfer to U.S. industry and for space commercialization.

    UAS Pilots Code – Annotated Version 1.0

    The UAS PILOTS CODE (UASPC) offers recommendations to advance flight safety, ground safety, airmanship, and professionalism. It presents a vision of excellence for UAS pilots and operators, and includes general guidance for all types of UAS. The UASPC offers broad guidance—a set of values—to help a pilot interpret and apply standards and regulations, and to confront real-world challenges to avoid incidents and accidents. It is designed to help UAS pilots develop standard operating procedures (SOPs), effective risk management, and safety management systems (SMS), and to encourage UAS pilots to consider themselves aviators and participants in the broader aviation community.

    QoS-aware Resource-utilisation Self-adaptive (QRS) Framework for Distributed Data Stream Management Systems

    The last decade witnessed a vast number of Big Data applications in science and industry alike. Such applications generate large amounts of streaming data and real-time event-based information, which must be analysed under specific quality-of-service (QoS) constraints and within extremely low latencies. Many distributed data stream processing approaches are based on a best-effort QoS principle and lack the capability to adapt dynamically to fluctuations in data input rates. Most proposed solutions either drop some of the input data (load shedding) or degrade the level of QoS the system provides. Another approach is to limit the data ingestion rate using techniques such as backpressure heartbeats, which can affect the worker nodes and cause output delays. Such approaches are not suitable for certain mission-critical applications such as critical infrastructure surveillance, monitoring and signalling, vital health care monitoring, and military command and control streaming applications. This research presents a novel QoS-aware, Resource-utilisation Self-adaptive (QRS) Framework for managing data stream processing systems. The framework proposes a comprehensive usage model that combines proactive operations with simultaneous prompt actions. The prompt actions instantly collect and analyse performance and QoS metrics along with the running data streams, ensuring that data does not lose its current value, while the proactive operations construct a prediction model that anticipates QoS violations and performance degradation in the system. The model triggers the decision process needed to tune resources dynamically or adopt a new scheduling strategy. A proof-of-concept model was built that accurately represents the working conditions of a distributed data stream management ecosystem, and the proposed framework was validated and verified. Several of the framework's components were fully implemented on the emerging and prevalent distributed data stream processing system Apache Storm. The framework predicts the system's capacity to handle data load and input rate with up to 81% accuracy, rising to 100% when anomaly detection techniques are incorporated. Moreover, the framework performs well compared with Storm's default round-robin and resource-aware schedulers: it handles high data rates better by re-balancing the topology and re-scheduling resources based on the prediction models well ahead of any congestion or QoS degradation.
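    A hedged sketch of the monitor-predict-adapt loop such a framework implies is shown below. All names (fetch_utilisation, predict_next, rebalance), the linear-trend predictor, and the thresholds are invented placeholders, not the QRS framework's or Apache Storm's APIs; the point is only the proactive structure: predict a QoS violation from recent metrics and re-schedule before congestion occurs.

```python
import random
from collections import deque

WINDOW = 30        # recent samples used by the predictor (assumed)
THRESHOLD = 0.8    # predicted utilisation that triggers proactive adaptation (assumed)

history = deque(maxlen=WINDOW)

def fetch_utilisation() -> float:
    # Placeholder for polling throughput/latency/CPU metrics from worker nodes.
    return min(1.0, max(0.0, random.gauss(0.6, 0.15)))

def predict_next(samples) -> float:
    # Stand-in predictor: linear extrapolation of the recent trend.
    # The thesis builds a learned prediction model; this is only a proxy.
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    trend = (samples[-1] - samples[0]) / (len(samples) - 1)
    return samples[-1] + trend

def rebalance() -> None:
    # Placeholder for re-balancing the topology / re-scheduling resources.
    print("proactive rebalance triggered")

for _ in range(100):                       # one monitoring tick per metric sample
    history.append(fetch_utilisation())
    if predict_next(history) > THRESHOLD:  # act before the violation occurs
        rebalance()
```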

    Online Markov Chain Learning for Quality of Service Engineering in Adaptive Computer Systems

    Computer systems are increasingly used in applications where the consequences of failure vary from financial loss to loss of human life. As a result, significant research has focused on the model-based analysis and verification of the compliance of business-critical and security-critical computer systems with their requirements. Many of the formalisms proposed by this research target the analysis of quality-of-service (QoS) computer system properties such as reliability, performance and cost. However, the effectiveness of such analysis or verification depends on the accuracy of the QoS models they rely upon. Building accurate mathematical models for critical computer systems is a great challenge, particularly for systems used in applications affected by frequent changes in workload, requirements and environment. In these scenarios, QoS models become obsolete unless they are continually updated to reflect the evolving behaviour of the analysed systems. This thesis introduces new techniques for learning the parameters and the structure of discrete-time Markov chains, a class of models that is widely used to establish key reliability, performance and other QoS properties of real-world systems. The new learning techniques use as input run-time observations of system events associated with costs/rewards and transitions between the states of a model. When the model structure is known, they continually update its state transition probabilities and costs/rewards in line with the observed variations in the behaviour of the system. When the model structure is unknown, a Markov chain is synthesised from sequences of such observations. The two categories of learning techniques underpin the operation of a new toolset for the engineering of self-adaptive service-based systems, which was developed as part of this research. The thesis introduces this software engineering toolset and demonstrates its effectiveness in a case study involving the development of a prototype telehealth service-based system capable of continual self-verification.
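    The parameter-learning half of this idea reduces, in its simplest form, to maintaining transition counts from run-time observations. The Python sketch below illustrates this with a Laplace-smoothed frequency estimate; the prior, the state names, and the toy trace are illustrative assumptions, not the thesis's exact estimator.

```python
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # counts[src][dst] = observed transitions

def observe(src: str, dst: str) -> None:
    counts[src][dst] += 1                       # one run-time observation

def transition_probs(src: str, states, prior: float = 1.0) -> dict:
    # Laplace-smoothed estimate of P(dst | src); the prior keeps unseen
    # transitions possible until enough observations accumulate.
    total = sum(counts[src][s] for s in states) + prior * len(states)
    return {s: (counts[src][s] + prior) / total for s in states}

# Toy trace: a service request that succeeds, retries, or (never yet observed) fails.
for src, dst in [("req", "ok"), ("req", "retry"), ("retry", "ok"), ("req", "ok")]:
    observe(src, dst)
print(transition_probs("req", states=["ok", "retry", "fail"]))
# -> {'ok': 0.5, 'retry': 0.333..., 'fail': 0.166...}
```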

    Correct-By-Construction Fault-Tolerant Control

    Correct-by-construction control synthesis methods refer to a collection of model-based techniques for algorithmically generating controllers/strategies that make systems satisfy formal specifications. Such techniques attract much attention because they provide formal guarantees on the correctness of cyber-physical systems, where corner cases may arise from the interaction among different modules. The controllers synthesized through such methods, however, may still malfunction due to faults, such as physical component failures and unexpected operating conditions, which lead to a sudden change of the system model. In these cases, we want to guarantee that the performance of the faulty system degrades gracefully, and hence achieve fault tolerance. This thesis is about 1) incorporating fault detection and detectability analysis algorithms in correct-by-construction control synthesis, 2) formalizing the graceful degradation specification for fault-tolerant systems with temporal logic, and 3) developing algorithms to synthesize correct-by-construction controllers that achieve such graceful degradation, with possible delay in the fault detection. In particular, two sets of approaches from the temporal logic planning domain, i.e., abstraction-based synthesis and optimization-based path planning, are considered. First, for abstraction-based approaches, we propose a recursive algorithm that reduces the fault-tolerant controller synthesis problem to multiple small synthesis problems with simpler specifications. This recursive reduction leverages the structure of the fault propagation and hence avoids the high complexity of solving the problem monolithically as one general temporal logic game. Furthermore, by exploring the structural properties in the specifications, we show that, even when the fault is detected with delay, the problem can be solved by a similar recursive algorithm without constructing the belief space. Secondly, optimization-based path planning is considered. The proposed approach leverages recently developed temporal logic encodings and state-of-the-art mixed integer programming (MIP) solvers. The novelty of this work is to enhance the open-loop strategy obtained by solving the MIP so that it can react contingently to faults and disturbance. Finally, the control synthesis techniques developed for discrete-state systems are shown to be applicable to continuous-state systems, as demonstrated on a fuel cell thermal management application. In particular, to apply the abstraction-based synthesis methods to complex systems such as the fuel cell thermal management system, structural properties of the system (e.g., mixed monotonicity) are explored and leveraged to ease abstraction computation, and techniques are developed to improve the scalability of the synthesis process whenever the system has a large number of control actions.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155031/1/yliren_1.pd
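    To make "abstraction-based synthesis" concrete, the sketch below solves its simplest instance: a safety specification on a tiny finite abstraction with adversarial nondeterminism, computing the set of states from which a controller can win as a greatest fixed point of the controllable predecessor. The three-state system is invented for illustration and is far simpler than the temporal logic games treated in the thesis.

```python
# trans[(state, action)] = set of possible successors (the adversary/disturbance picks one)
trans = {
    (0, "a"): {0, 1}, (0, "b"): {2},
    (1, "a"): {1},    (1, "b"): {0, 2},
    (2, "a"): {2},    (2, "b"): {2},
}
states = {0, 1, 2}
actions = {"a", "b"}
safe = {0, 1}                        # specification: always avoid the fault state 2

def cpre(target: set) -> set:
    # States with some action whose successors ALL stay inside `target`,
    # no matter what the adversary picks.
    return {s for s in states
            if any(trans[(s, a)] <= target for a in actions)}

win = set(states)
while True:                          # greatest fixed point: win = safe ∩ cpre(win)
    nxt = safe & cpre(win)
    if nxt == win:
        break
    win = nxt
print("controller exists from states:", win)   # -> {0, 1}
```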

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.

    Generalized asset integrity games

    Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the interaction between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
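    The integrity score described above is straightforward to prototype: compute the NCD between serializations of the live and reference asset models using an off-the-shelf compressor. In the Python sketch below, the zlib compressor, the edge-list serialization, and the 1 − NCD score definition are illustrative assumptions, not the thesis's exact construction.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance via an off-the-shelf compressor.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def integrity_score(live: bytes, reference: bytes) -> float:
    # Monotone in proximity to the reference configuration:
    # identical models score near 1 (real compressors never reach it exactly).
    return 1.0 - ncd(live, reference)

reference = b"A-B;B-C;C-D;D-A"   # toy edge-list serialization of the asset graph
degraded  = b"A-B;B-C"           # live model after losing two links
print(integrity_score(reference, reference))   # close to 1.0
print(integrity_score(degraded, reference))    # lower: integrity has degraded
```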