
    An automated wrapper-based approach to the design of dependable software

    The design of dependable software systems invariably comprises two main activities: (i) the design of dependability mechanisms, and (ii) the location of dependability mechanisms. It has been shown that these activities are intrinsically difficult. In this paper we propose an automated wrapper-based methodology to circumvent the problems associated with the design and location of dependability mechanisms. To achieve this we replicate important variables so that they can be used as part of standard, efficient dependability mechanisms. These well-understood mechanisms are then deployed in all relevant locations. To validate the proposed methodology we apply it to three complex software systems, evaluating the dependability enhancement and execution overhead in each case. The results generated demonstrate that the system failure rate of a wrapped software system can be several orders of magnitude lower than that of an unwrapped equivalent.
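    The core idea, replicating important variables so that a wrapper built from standard, well-understood mechanisms can detect and mask corruption, can be illustrated with a minimal Python sketch. The class and method names below are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch (not the paper's mechanism): a wrapper that replicates
# designated "important" variables and reads them back through a majority
# vote, so a single corrupted copy is outvoted by the healthy replicas.
from collections import Counter


class ReplicatingWrapper:
    def __init__(self, replicas=3):
        self._replicas = replicas
        self._store = {}                      # variable name -> replica list

    def write(self, name, value):
        # Every write refreshes all replicas of the variable.
        self._store[name] = [value] * self._replicas

    def read(self, name):
        # Majority vote across replicas masks a minority of corruptions.
        value, count = Counter(self._store[name]).most_common(1)[0]
        if count <= self._replicas // 2:
            raise RuntimeError(f"no majority for variable {name!r}")
        return value


# Hypothetical usage: a controller stores its set-point through the wrapper.
w = ReplicatingWrapper(replicas=3)
w.write("setpoint", 42)
w._store["setpoint"][1] = 99                  # simulate one corrupted copy
assert w.read("setpoint") == 42               # the majority still wins
```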

    A distributed programming environment for Ada

    Despite considerable commercial exploitation of fault-tolerant systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.

    Nonlinear mechanisms in passive microwave devices

    The telecommunications industry follows a tendency towards smaller devices, higher power and higher frequency, which implies an increase in the complexity of the electronics involved. Moreover, there is a need for extended capabilities such as frequency-tunable devices, ultra-low losses or high power handling, which call for advanced materials. In addition, increasingly demanding communication standards and regulations push the limits of the acceptable performance-degrading indicators. This is the case of nonlinearities, whose effects, such as increased Adjacent Channel Power Ratio (ACPR), harmonics, or intermodulation distortion among others, are being included in the performance requirements as maximum tolerable levels. In this context, proper modeling of the devices at the design stage is of crucial importance for predicting not only the device performance but also the global system indicators, and for making sure that the requirements are fulfilled. In accordance with that, this work proposes the necessary steps for implementing circuit models of different passive microwave devices, from linear and nonlinear measurements to the simulations that validate them. Bulk acoustic wave resonators and transmission lines made of high-temperature superconductors, ferroelectrics or regular metals and dielectrics are the subject of this work. Both phenomenological and physical approaches are considered, and circuit models are proposed and compared with measurements. The nonlinear observables, namely harmonics, intermodulation distortion, and saturation or detuning, are related to the material properties that originate them. The obtained models can be used in circuit simulators to predict the performance of these microwave devices under complex modulated signals, or even to predict their performance when integrated into more complex systems. A key step towards this goal is an accurate characterization of materials and devices, which is addressed by advanced measurement techniques; therefore, special measurement setups are considered throughout this thesis.
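    As background for the nonlinear observables mentioned above, the standard textbook expansion of a memoryless weak nonlinearity (illustrative, not taken from the thesis) shows how a cubic term driven by two tones generates third harmonics and third-order intermodulation (IMD3) products.

```latex
% Textbook weak-nonlinearity expansion (illustrative, not from the thesis):
% a cubic term driven by two tones produces third harmonics and
% third-order intermodulation (IMD3) products near the carriers.
\begin{align}
  v_{\mathrm{out}}(t) &= a_1 v(t) + a_2 v^2(t) + a_3 v^3(t),
  \qquad v(t) = A\bigl(\cos\omega_1 t + \cos\omega_2 t\bigr),\\
  \text{IMD3:}\quad &\tfrac{3}{4}\,a_3 A^3\cos\bigl((2\omega_1-\omega_2)t\bigr)
    + \tfrac{3}{4}\,a_3 A^3\cos\bigl((2\omega_2-\omega_1)t\bigr),\\
  \text{3rd harmonics:}\quad &\tfrac{1}{4}\,a_3 A^3\cos(3\omega_1 t)
    + \tfrac{1}{4}\,a_3 A^3\cos(3\omega_2 t).
\end{align}
```

    The cubic growth of these terms with the drive amplitude A is why they become increasingly visible as devices are pushed to higher power.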

    ADEPT2 - Next Generation Process Management Technology

    If current process management systems are to be applied to a broad spectrum of applications, they will have to be significantly improved with respect to their technological capabilities. In particular, in dynamic environments it must be possible to quickly implement and deploy new processes, to enable ad-hoc modifications of single process instances at runtime (e.g., to add, delete or shift process steps), and to support process schema evolution with instance migration, i.e., to propagate process schema changes to already running instances. These requirements must be met without affecting process consistency and while preserving the robustness of the process management system. In this paper we describe how these challenges have been addressed and solved in the ADEPT2 Process Management System. Our overall vision is to provide a next-generation process management technology that can be used in a variety of application domains.
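    The schema-evolution-with-instance-migration idea can be sketched minimally in Python: a new schema version is derived, and a running instance is moved to it only if it has not yet entered the changed region. This is an illustration of the concept, not the ADEPT2 API; all names are hypothetical.

```python
# Illustrative sketch (not the ADEPT2 API): propagate a schema change to
# running instances, migrating only those that are still state-compliant,
# i.e. that have not yet executed the region affected by the change.
from dataclasses import dataclass, field


@dataclass
class Schema:
    steps: list                               # ordered step names


@dataclass
class Instance:
    schema: Schema
    completed: list = field(default_factory=list)


def insert_step(schema, new_step, before):
    """Schema evolution: derive a new schema version with an extra step."""
    i = schema.steps.index(before)
    return Schema(schema.steps[:i] + [new_step] + schema.steps[i:])


def migrate(instances, new, before):
    """Instance migration: adopt the new schema only if the insertion point
    lies ahead of the instance's current progress (state compliance)."""
    for inst in instances:
        if before not in inst.completed:      # change region not yet reached
            inst.schema = new
        # otherwise the instance finishes on its old schema version


old = Schema(["receive_order", "check_stock", "ship", "bill"])
new = insert_step(old, "fraud_check", before="ship")
running = [Instance(old, completed=["receive_order"]),
           Instance(old, completed=["receive_order", "check_stock", "ship"])]
migrate(running, new, before="ship")
assert running[0].schema is new and running[1].schema is old
```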

    Gaussian Belief with dynamic data and in dynamic network

    In this paper we analyse Belief Propagation over a Gaussian model in a dynamic environment. Recently, this has been proposed as a method to average local measurement values by a distributed protocol ("Consensus Propagation", Moallemi & Van Roy, 2006), where the average is available for read-out at every single node. In the case that the underlying network is constant but the values to be averaged fluctuate ("dynamic data"), convergence and accuracy are determined by the spectral properties of an associated Ruelle-Perron-Frobenius operator. For Gaussian models on Erdos-Renyi graphs, numerical computation points to a spectral gap remaining in the large-size limit, implying exceptionally good scalability. In a model where the underlying network also fluctuates ("dynamic network"), averaging is more effective than in the dynamic data case. Altogether, this implies very good performance of these methods in very large systems, and opens a new field of statistical physics of large (and dynamic) information systems.
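    A minimal sketch of consensus-propagation-style Gaussian message passing for distributed averaging on a fixed graph is given below. The update rules follow the form commonly stated for Moallemi & Van Roy's protocol; the attenuation parameter beta, the synchronous schedule, and the function names are assumptions of this sketch rather than details taken from this paper.

```python
# Minimal sketch of consensus-propagation-style distributed averaging
# (after Moallemi & Van Roy, 2006). The attenuation parameter beta and the
# synchronous update schedule are assumptions of this sketch, not details
# taken from the paper discussed above.

def consensus_propagation(adj, y, beta=10.0, iters=200):
    """adj: dict node -> set of neighbours; y: dict node -> local measurement.
    Returns per-node estimates of the global average of y."""
    K = {(i, j): 0.0 for i in adj for j in adj[i]}   # precision messages
    mu = {(i, j): 0.0 for i in adj for j in adj[i]}  # mean messages
    for _ in range(iters):
        newK, newmu = {}, {}
        for i in adj:
            for j in adj[i]:
                s = 1.0 + sum(K[(u, i)] for u in adj[i] if u != j)
                m = (y[i] + sum(K[(u, i)] * mu[(u, i)]
                                for u in adj[i] if u != j)) / s
                newK[(i, j)] = s / (1.0 + s / beta)  # finite beta keeps K bounded
                newmu[(i, j)] = m
        K, mu = newK, newmu
    # Read-out: every node forms its own estimate of the average.
    return {i: (y[i] + sum(K[(u, i)] * mu[(u, i)] for u in adj[i]))
               / (1.0 + sum(K[(u, i)] for u in adj[i]))
            for i in adj}


# Usage on a small ring: estimates approach the global mean 2.5 as beta grows.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
y = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
print(consensus_propagation(adj, y))
```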