2,142 research outputs found

    Efficient feedback controllers for continuous-time quantum error correction

    We present an efficient approach to continuous-time quantum error correction that extends the low-dimensional quantum filtering methodology developed by van Handel and Mabuchi [quant-ph/0511221 (2005)] to include error recovery operations in the form of real-time quantum feedback. We expect this paradigm to be useful for systems in which error recovery operations cannot be applied instantaneously. While we could not find an exact low-dimensional filter that combined both continuous syndrome measurement and a feedback Hamiltonian appropriate for error recovery, we developed an approximate reduced-dimensional model to do so. Simulations of the five-qubit code subjected to the symmetric depolarizing channel suggest that error correction based on our approximate filter performs essentially identically to correction based on an exact quantum dynamical model.

    Real-time classification of various types of falls and activities of daily living based on a CNN-LSTM network

    In this research, two multiclass models have been developed and implemented: a standard long short-term memory (LSTM) model and a convolutional neural network combined with LSTM (CNN-LSTM) model. Both models operate on raw acceleration data stored in the public SisFall dataset. These models have been trained using the TensorFlow framework to classify and recognize ten different events: five types of falls and five activities of daily living (ADLs). An accuracy of more than 96% was reached within the first 200 epochs of training. Furthermore, a real-time prototype for recognizing falls and ADLs has been developed using the TensorFlow Lite framework on a Raspberry Pi, which yielded acceptable performance.
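Before raw acceleration streams can be fed to an LSTM or CNN-LSTM, they are typically segmented into fixed-length, overlapping windows. The sketch below illustrates that preprocessing step in plain Python; the window length and 50% overlap are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch: segmenting raw tri-axial acceleration samples into fixed-length,
# overlapping windows -- the usual preprocessing step before an LSTM or
# CNN-LSTM classifier. Window size and overlap are assumed values.

def make_windows(samples, window=200, overlap=0.5):
    """Split a list of (ax, ay, az) samples into overlapping windows."""
    step = max(1, int(window * (1 - overlap)))
    windows = []
    for start in range(0, len(samples) - window + 1, step):
        windows.append(samples[start:start + window])
    return windows

# Example: 1000 fake samples -> overlapping windows of 200 samples each
data = [(0.0, 0.0, 9.81)] * 1000
wins = make_windows(data)
print(len(wins), len(wins[0]))  # -> 9 200
```

Each window would then be labeled with one of the ten event classes and stacked into the model's input tensor.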

    Value and efficacy of transcranial direct current stimulation in the rehabilitation of neurocognitive disorders: A critical review since 2000.

    Non-invasive brain stimulation techniques, including transcranial direct current stimulation (tDCS), have been used in the rehabilitation of cognitive function in a spectrum of neurological disorders. The present review outlines methodological commonalities and differences of tDCS procedures in neurocognitive rehabilitation. We consider the efficacy of tDCS for the management of specific cognitive deficits in four main neurological disorders, providing a critical analysis of recent studies that have used tDCS to improve cognition in patients with Parkinson's Disease, Alzheimer's Disease, Hemispatial Neglect and Aphasia. The evidence from this innovative approach to cognitive rehabilitation suggests that tDCS can influence cognition. However, the results show high variability between studies in both the methodological approach adopted and the cognitive functions assessed. The review also focuses on methodological issues, such as technical aspects of the stimulation (electrode position and size; current intensity; protocol duration), and on the inclusion of appropriate assessment tools for cognition. A further aspect considered is the best timing for administering tDCS: before, during, or after cognitive rehabilitation. We conclude that more studies with a shared methodology are needed to better understand the efficacy of tDCS as a new tool for the rehabilitation of cognitive disorders in a range of neurological disorders.

    A synthesis of logic and bio-inspired techniques in the design of dependable systems

    Much of the development of model-based design and dependability analysis in the design of dependable systems, including software-intensive systems, can be attributed to advances in formal logic and their application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle, covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
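The bio-inspired, meta-heuristic side of this approach can be illustrated with a toy genetic algorithm that picks per-component redundancy levels to maximise system reliability under a cost budget. This is only a sketch of the general technique: the component reliabilities, costs, and GA parameters below are invented, and HiP-HOPS itself uses far richer dependability models and analyses.

```python
import random

# Toy genetic algorithm: choose 1-3 replicas per component to maximise
# system reliability under a cost budget. All numbers are illustrative.
REL = [0.95, 0.90, 0.99]   # per-replica reliability of each component
COST = [4, 2, 6]           # cost of one replica of each component
BUDGET = 24

def reliability(design):
    # Parallel redundancy: a component fails only if all its replicas fail.
    r = 1.0
    for rel, n in zip(REL, design):
        r *= 1 - (1 - rel) ** n
    return r

def fitness(design):
    cost = sum(c * n for c, n in zip(COST, design))
    return reliability(design) if cost <= BUDGET else 0.0  # budget as hard cap

def evolve(pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 3) for _ in REL] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            child[rng.randrange(len(child))] = rng.randint(1, 3)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 4))
```

A real dependability-driven optimisation would replace `reliability` with the results of an automated safety analysis of each candidate architecture, which is where the synthesis of formal analysis and evolutionary search pays off.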

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and capabilities that will be required for both individual tools and an integrated toolset were identified.
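One elementary building block of the reliability side of such trade-off studies is the probability that a redundant cluster keeps enough working nodes. The sketch below computes k-of-n reliability from the binomial distribution; the processor counts and per-node reliability are illustrative, not figures from the report.

```python
from math import comb

# Minimal sketch of a reliability trade-off calculation: the probability
# that at least k of n identical processors survive, given per-processor
# reliability p. Numbers below are illustrative only.

def k_of_n_reliability(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 3-of-5 fault-tolerant processor cluster with p = 0.9 per node:
print(round(k_of_n_reliability(3, 5, 0.9), 4))  # -> 0.9914
```

Sweeping `n` and `p` against performance models of the same architectures is the kind of joint performance/reliability exploration the integrated toolset is meant to support.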

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS proved to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail.
MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
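The scheduling idea behind the parallel-plan construction can be sketched in a few lines: given a sequential plan and precedence constraints between its actions, assign each action the earliest start time compatible with its predecessors. A single linear pass suffices because the sequential plan already orders actions consistently with the precedences. The actions, durations, and precedence relation below are invented for illustration and are not from a competition domain.

```python
# Sketch: turning a sequential plan into a parallel schedule by computing
# earliest start times from precedence constraints. One linear pass works
# because the plan order already respects every precedence. Example data
# (actions, durations, precedences) is invented for illustration.

def schedule(plan, duration, preceded_by):
    start = {}
    for action in plan:
        deps = preceded_by.get(action, [])
        # Earliest start = latest finish time among required predecessors.
        start[action] = max((start[d] + duration[d] for d in deps), default=0.0)
    return start

plan = ["load", "fuel", "fly", "unload"]
duration = {"load": 2.0, "fuel": 3.0, "fly": 5.0, "unload": 2.0}
preceded_by = {"fly": ["load", "fuel"], "unload": ["fly"]}

times = schedule(plan, duration, preceded_by)
print(times)  # load and fuel overlap at t=0; fly starts at t=3; unload at t=8
```

Here the sequential plan takes 12 time units, while the schedule finishes at t=10 because "load" and "fuel" run in parallel, illustrating how precedence-based scheduling shortens the makespan without the cost of searching over partial orders.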