
    Fault-tolerant observer design with a tolerance measure for systems with sensor failure

    A fault-tolerant switching observer design methodology is proposed. The aim is to maintain a desired level of closed-loop performance under a range of sensor fault scenarios while optimizing the fault-free nominal performance. The range of considered fault scenarios is determined by a minimum number p of sensors assumed to remain working; thus, the smaller p is, the more fault-tolerant the observer. This is then used to define a fault-tolerance measure for observer design. Because of the combinatorial nature of the problem, a semidefinite relaxation procedure is proposed to handle the large number of fault scenarios in systems with many vulnerable sensors; the procedure significantly reduces the number of constraints needed to solve the problem. Two numerical examples are presented to illustrate the effectiveness of the fault-tolerant observer design.
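    As a rough illustration of the combinatorial growth the abstract refers to, the sketch below (a hypothetical helper, not code from the paper) enumerates every fault scenario in which at least p sensors remain healthy; without the relaxation, each scenario would contribute its own design constraint.

```python
from itertools import combinations

def fault_scenarios(num_sensors, p):
    """Enumerate all subsets of sensors with at least p working sensors.

    Each scenario is the set of sensor indices assumed to remain healthy;
    a non-relaxed design must satisfy a performance constraint for every
    such subset, which is why the count explodes as p decreases.
    """
    sensors = range(num_sensors)
    scenarios = []
    for k in range(p, num_sensors + 1):
        scenarios.extend(combinations(sensors, k))
    return scenarios

# Example: 8 vulnerable sensors, at least 3 assumed working
print(len(fault_scenarios(8, 3)))  # 219 scenarios -> 219 constraints without relaxation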

    Predictive hypothesis identification

    While statistics focuses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with the best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete instantiations we recover well-known methods, variations thereof, and new ones. PHI nicely justifies, reconciles, and blends (a reparametrization-invariant variation of) MAP, ML, MDL, and moment estimation. One particular feature of PHI is that it can genuinely deal with nested hypotheses.
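    The toy Bernoulli sketch below is only a hedged illustration of the predictive viewpoint, not the paper's formal PHI criterion: a point hypothesis is chosen for how well it is expected to predict the next observation, rather than for how well it fits the observed data.

```python
import numpy as np

data = np.array([1, 1, 0, 1, 1, 0, 1, 1])             # observed coin flips
thetas = np.linspace(0.01, 0.99, 99)                   # candidate point hypotheses

# Posterior predictive probability of the next flip under a uniform prior
heads, tails = data.sum(), len(data) - data.sum()
p_next = (heads + 1) / (len(data) + 2)                 # = 0.7 here

# Expected log-predictive score of each hypothesis on future data
scores = p_next * np.log(thetas) + (1 - p_next) * np.log(1 - thetas)
best = thetas[np.argmax(scores)]
print(f"predictively identified hypothesis: theta = {best:.2f}")   # 0.70, not the ML estimate 0.75
```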

    FTT:Power: A global model of the power sector with induced technological change and natural resource depletion

    This work introduces a model of Future Technology Transformations for the power sector (FTT:Power), a representation of global power systems based on market competition, induced technological change (ITC) and natural resource use and depletion. It is the first component of a family of sectoral bottom-up models of future technology transformations, designed to be integrated into the global macroeconometric model E3MG. ITC occurs as a result of technological learning, as given by cumulative investment, and leads to highly nonlinear, irreversible and path-dependent technological transitions. The model makes use of a dynamic coupled set of logistic differential equations. As opposed to traditional bottom-up energy models based on systems optimisation, logistic equations offer an appropriate treatment of the times and rates of change involved in sectoral technology transformations. Resource use and depletion are represented by local cost-supply curves, which give rise to different regional energy landscapes. The model is explored using two simple scenarios: a baseline and a mitigation case where the price of carbon is gradually increased. While a constant price of carbon leads to a stagnant system, mitigation produces successive technology transitions leading towards the gradual decarbonisation of the global power sector. This work was supported by the Three Guineas Trust. Submitted for publication to Energy Policy.
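    A minimal sketch of the kind of coupled logistic share dynamics the abstract describes, with a made-up pairwise preference matrix and ignoring learning curves, carbon pricing and cost-supply curves; it is not the FTT:Power shares equation itself.

```python
import numpy as np

def share_dynamics(S, F, dt=0.1):
    """One Euler step of coupled logistic (replicator-like) share equations.

    S : current market shares of competing power technologies (sums to 1)
    F : pairwise preference matrix; F[i, j] > 0 favours flows from j to i.
    """
    n = len(S)
    dS = np.zeros_like(S)
    for i in range(n):
        for j in range(n):
            # exchange of shares is proportional to both shares, so
            # transitions are slow whenever either technology is marginal
            dS[i] += S[i] * S[j] * (F[i, j] - F[j, i])
    return S + dt * dS      # total share is conserved by antisymmetry

# Three stylised technologies: coal, gas, wind
S = np.array([0.6, 0.3, 0.1])
F = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.4, 0.5, 0.0]])   # wind preferred, e.g. under a rising carbon price
for _ in range(200):
    S = share_dynamics(S, F)
print(S.round(3))                 # shares shift gradually toward wind
```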

    Methods for evaluating Decision Problems with Limited Information

    Limited Memory Influence Diagrams (LIMIDs) are general models of decision problems for representing limited-memory policies (Lauritzen and Nilsson, 2001). LIMIDs can be evaluated by Single Policy Updating, which produces a local maximum strategy in which no single policy modification can increase the expected utility. This paper examines the quality of the obtained local maximum strategy and proposes three different methods for evaluating LIMIDs. The first algorithm, Temporal Policy Updating, resembles Single Policy Updating. The second algorithm, Greedy Search, successively updates the policy that gives the highest expected utility improvement. The final algorithm, Simulated Annealing, differs from the two preceding ones by allowing the search to take some downhill steps to escape a local maximum. A careful comparison of the algorithms is provided, both in terms of the quality of the obtained strategies and in terms of the implementation of the algorithms, including some considerations of computational complexity.
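    The sketch below shows the generic simulated-annealing idea over a discrete strategy space, using a made-up utility function rather than an actual LIMID; it only illustrates why accepting occasional downhill moves can escape the local maxima that single-policy updates get stuck in.

```python
import math
import random

def simulated_annealing(score, n_decisions, n_actions, steps=2000, T0=1.0):
    """Anneal over strategies that assign one action to each decision.

    Downhill moves are accepted with probability exp(delta / T), which is
    what lets the search escape local maxima.  (Toy illustration, not the
    paper's LIMID evaluation algorithm.)
    """
    strategy = [random.randrange(n_actions) for _ in range(n_decisions)]
    best, best_score = list(strategy), score(strategy)
    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-9           # linear cooling schedule
        candidate = list(strategy)
        d = random.randrange(n_decisions)             # modify a single policy
        candidate[d] = random.randrange(n_actions)
        delta = score(candidate) - score(strategy)
        if delta >= 0 or random.random() < math.exp(delta / T):
            strategy = candidate
            if score(strategy) > best_score:
                best, best_score = list(strategy), score(strategy)
    return best, best_score

# Toy expected-utility surface with several local maxima
toy_utility = lambda s: sum(math.sin(a + i) for i, a in enumerate(s))
print(simulated_annealing(toy_utility, n_decisions=4, n_actions=5))
```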