110 research outputs found

    Survivability in layered networks

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 195-204).

    In layered networks, a single failure at the lower (physical) layer may cause multiple failures at the upper (logical) layer. As a result, traditional schemes that protect against single failures may not be effective in layered networks. This thesis studies the problem of maximizing network survivability in the layered setting, with a focus on optimizing the embedding of the logical network onto the physical network. In the first part of the thesis, we investigate the fundamental properties of layered networks and show that basic network connectivity structures, such as cuts, paths and spanning trees, exhibit fundamentally different characteristics from their single-layer counterparts. This leads to our development of a new cross-layer survivability metric that properly quantifies the resilience of the layered network against physical failures. Using this new metric, we design algorithms, based on multi-commodity flows, that embed the logical network onto the physical network so as to maximize cross-layer survivability.

    In the second part of the thesis, we extend our model to a random failure setting and study the cross-layer reliability of the networks, defined to be the probability that the upper-layer network stays connected under random failure events. We generalize the classical polynomial expression for network reliability to the layered setting. Using Monte Carlo techniques, we develop efficient algorithms to compute an approximate polynomial expression for reliability as a function of the link failure probability. The construction of the polynomial eliminates the need to resample when the cross-layer reliability under different link failure probabilities is assessed. Furthermore, the polynomial expression provides important insight into the connection between the link failure probability, the cross-layer reliability, and the structure of a layered network. We show that in general the optimal embedding depends on the link failure probability, and we characterize the properties of embeddings that maximize reliability under different failure-probability regimes. Based on these results, we propose new iterative approaches to improve the reliability of layered networks. We demonstrate via extensive simulations that these new approaches result in embeddings with significantly higher reliability than existing algorithms.

    by Kayi Lee. Ph.D.
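
    As a rough illustration of the resampling-free polynomial idea, the sketch below (Python with networkx) estimates, for each failure count k, the fraction f_k of size-k physical failure sets that leave the logical layer connected, and assembles R(p) = sum_k C(m,k) f_k p^k (1-p)^(m-k), so that the reliability at any link failure probability p can be evaluated without resampling. The toy ring topology, triangle logical graph and embedding dictionary are illustrative assumptions, not the thesis's algorithms.

        import math
        import random
        import networkx as nx

        def logical_connected(physical, logical, embedding, failed):
            """True if the logical layer is connected after the given physical links fail."""
            down = {frozenset(e) for e in failed}
            # a logical link survives iff no physical link on its embedded path failed
            surviving = [(u, v) for (u, v), path in embedding.items()
                         if not any(frozenset(e) in down for e in path)]
            g = nx.Graph(surviving)
            g.add_nodes_from(logical.nodes())
            return nx.is_connected(g)

        def reliability_polynomial(physical, logical, embedding, samples=1000):
            edges = list(physical.edges())
            m = len(edges)
            f = []  # f[k]: estimated P(logical layer survives k random failures)
            for k in range(m + 1):
                ok = sum(logical_connected(physical, logical, embedding,
                                           random.sample(edges, k))
                         for _ in range(samples))
                f.append(ok / samples)
            # closure evaluating R(p) for any link failure probability p
            return lambda p: sum(math.comb(m, k) * f[k] * p**k * (1 - p)**(m - k)
                                 for k in range(m + 1))

        physical = nx.cycle_graph(4)                      # physical ring 0-1-2-3-0
        logical = nx.Graph([(0, 1), (1, 2), (0, 2)])      # logical triangle
        embedding = {(0, 1): [(0, 1)],                    # logical link -> physical path
                     (1, 2): [(1, 2)],
                     (0, 2): [(0, 3), (3, 2)]}
        R = reliability_polynomial(physical, logical, embedding)
        print(R(0.01), R(0.1))                            # two p's, no resampling needed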

    Static reliability and resilience in dynamic systems

    Get PDF
    Two systems are modeled in this thesis. First, we consider a multi-component stochastic monotone binary system, or SMBS for short. The reliability of an SMBS is the probability of correct operation. A statistical approximation of the system reliability, inspired by Monte Carlo methods, is provided for these systems. We then focus on the diameter-constrained reliability model (DCR), which was originally developed for delay-sensitive applications over the Internet infrastructure. The computational complexity of the DCR is analyzed, and families of networks that admit efficient (i.e., polynomial-time) DCR computation, termed Weak graphs, are presented. Second, we model the effect of a dynamic epidemic propagation. Our first approach is to develop a SIR-based simulation in which the unrealistic assumptions of the SIR model (infinite, homogeneous, fully-mixed population) are discarded. Finally, we formalize a stochastic process that counts infected individuals, and we investigate node-immunization strategies subject to a budget constraint. A combinatorial optimization problem, called the Graph Fragmentation Problem, is introduced here. There, the impact of a highly virulent epidemic propagation is analyzed, and we mathematically prove that the greedy heuristic is suboptimal.
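
    A crude Monte Carlo sketch of the diameter-constrained reliability measure is given below: the probability that a source and a terminal remain connected by a path of at most d hops when each link fails independently. The Petersen graph, the hop bound and the failure probability are illustrative assumptions; the thesis's exact algorithms for Weak graphs are not reproduced here.

        import random
        import networkx as nx

        def dcr_monte_carlo(graph, s, t, d, p_fail, samples=10000):
            """Estimate P(dist(s, t) <= d) when each edge fails independently with p_fail."""
            hits = 0
            for _ in range(samples):
                g = nx.Graph(e for e in graph.edges() if random.random() > p_fail)
                g.add_nodes_from(graph.nodes())
                try:
                    if nx.shortest_path_length(g, s, t) <= d:
                        hits += 1
                except nx.NetworkXNoPath:
                    pass
            return hits / samples

        print(dcr_monte_carlo(nx.petersen_graph(), 0, 5, d=3, p_fail=0.1))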

    Stochastic programs and their value over deterministic programs

    Get PDF
    A dissertation submitted to the Faculty of Arts, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Arts.

    Real-life decision-making problems can often be modelled by mathematical programs (or optimization models). It is common for there to be uncertainty about the parameters of such optimization models. Usually, this uncertainty is ignored and a simplified deterministic program is obtained. Stochastic programs take account of this uncertainty by including a probabilistic description of the uncertain parameters in the model. Stochastic programs are therefore more appropriate, or valuable, than deterministic programs in many situations, and this is emphasized throughout the dissertation. The dissertation contains a development of the theory of stochastic programming, and a number of illustrative examples are formulated and solved. As a real-life application, a stochastic model for the unit commitment problem facing Eskom (one of the world's largest producers of electricity) is formulated and solved, and the solution is compared with that of the current strategy employed by Eskom.
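
    The value of solving the stochastic program rather than its deterministic simplification can be seen in a toy newsvendor model, sketched below: an order quantity q is chosen before demand is known, the deterministic model plugs in the mean demand, and the stochastic model maximizes expected profit over the demand distribution. All prices and the demand distribution are invented for the example.

        cost, price = 3.0, 5.0
        demand_dist = {80: 0.5, 100: 0.3, 120: 0.2}   # demand value: probability

        def expected_profit(q):
            return sum(p * (price * min(q, d) - cost * q)
                       for d, p in demand_dist.items())

        mean_demand = sum(d * p for d, p in demand_dist.items())   # 94
        # with discrete demand, expected profit is piecewise linear and concave in q,
        # so an optimal order quantity lies at one of the demand points
        q_sto = max(demand_dist, key=expected_profit)              # 80 here
        vss = expected_profit(q_sto) - expected_profit(mean_demand)
        print(q_sto, vss)   # value of the stochastic solution: 7.0 in this example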

    Detection template families for gravitational waves from the final stages of binary black-hole inspirals: Nonspinning case

    Get PDF
    We investigate the problem of detecting gravitational waves from binaries of nonspinning black holes with masses m = 5-20 Msun, moving on quasicircular orbits, which are arguably the most promising sources for first-generation ground-based detectors. We analyze and compare all the currently available post-Newtonian approximations for the relativistic two-body dynamics; for these binaries, different approximations predict different waveforms. We then construct examples of detection template families that embed all the approximate models and that could be used to detect the true gravitational-wave signal (but not to accurately characterize its physical parameters). We estimate that the fitting factor for our detection families is >~0.95 (corresponding to an event-rate loss <~15%), and we estimate that the discretization of the template family, for ~10^4 templates, increases the loss to <~20%.

    Comment: 58 pages, 38 EPS figures, final PRD version; small corrections to GW flux terms as per Blanchet et al., PRD 71, 129902(E)-129904(E) (2005).
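
    The fitting factor quoted above is the best normalized overlap between the true signal and any member of the template family. The sketch below illustrates the idea under strong simplifications: a flat noise spectrum (so the noise-weighted inner product reduces to an ordinary dot product), toy linear chirps in place of post-Newtonian waveforms, and maximization over the template phase only; a real computation also maximizes over arrival time and weights by the detector noise curve. All waveforms and grid values are illustrative assumptions.

        import numpy as np

        t = np.linspace(0.0, 1.0, 4096)

        def quadratures(f0, f1):
            """Unit-normalized cosine/sine quadratures of a toy chirp sweeping f0 -> f1."""
            phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2)
            hc, hs = np.cos(phase), np.sin(phase)
            return hc / np.linalg.norm(hc), hs / np.linalg.norm(hs)

        def overlap(signal, f0, f1):
            """Match maximized over template phase (quadratures assumed near-orthogonal)."""
            hc, hs = quadratures(f0, f1)
            return np.hypot(np.dot(signal, hc), np.dot(signal, hs))

        true_signal, _ = quadratures(30.0, 300.0)
        ff = max(overlap(true_signal, f0, f1)
                 for f0 in np.arange(29.1, 31.1, 0.25)    # grids deliberately offset
                 for f1 in np.arange(295.5, 305.5, 1.0))  # from the true parameters
        print(f"fitting factor over this grid ~ {ff:.3f}")

    As a rule of thumb, the fraction of event rate lost scales roughly as 1 - FF^3, which is how a fitting factor of >~0.95 translates into an event-rate loss of <~15%.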

    Interval and Possibilistic Methods for Constraint-Based Metabolic Models

    Full text link
    This thesis is devoted to the study and application of constraint-based metabolic models. The objective was to find simple ways to handle the difficulties that arise in practice due to uncertainty (knowledge is incomplete, there is a lack of measurable variables, and those available are imprecise). With this purpose, tools have been developed to model, analyse, estimate and predict the metabolic behaviour of cells.

    The document is structured in three parts. First, related literature is revised and summarised. This results in a unified perspective of several methodologies that use constraint-based representations of the cell metabolism. Three outstanding methods are discussed in detail: network-based pathway analysis (NPA), metabolic flux analysis (MFA), and flux balance analysis (FBA). Four types of metabolic pathways are also compared to clarify the subtle differences among them. The second part is devoted to interval methods for constraint-based models. The first contribution is an interval approach to traditional MFA, particularly useful for estimating the metabolic fluxes under data scarcity (FS-MFA). These estimates provide insight into the internal state of cells, which determines the behaviour they exhibit at given conditions. The second contribution is a procedure for monitoring the metabolic fluxes during a cultivation process that uses FS-MFA to handle uncertainty. The third part of the document addresses the use of possibility theory. The main contribution is a possibilistic framework to (a) evaluate model and measurement consistency, and (b) perform flux estimations (Poss-MFA). It combines flexibility in the assumptions with computational efficiency. Poss-MFA is also applied to monitoring fluxes and metabolite concentrations during a cultivation, information of great use for fault detection and control of industrial processes. Afterwards, the FBA problem is addressed.

    Llaneras Estrada, F. (2011). Interval and Possibilistic Methods for Constraint-Based Metabolic Models [Unpublished doctoral thesis]. Universitat PolitĂšcnica de ValĂšncia. https://doi.org/10.4995/Thesis/10251/10528
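
    The FBA problem mentioned above reduces to a linear program: maximize an objective (e.g., growth) flux subject to the steady-state constraint S v = 0 and flux bounds. Below is a minimal sketch using scipy; the three-reaction network, bounds and objective are illustrative assumptions, not a model from the thesis.

        import numpy as np
        from scipy.optimize import linprog

        # stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
        #   r1: -> A        r2: A -> B        r3: B -> biomass
        S = np.array([[1, -1,  0],
                      [0,  1, -1]])
        bounds = [(0, 10), (0, None), (0, None)]   # uptake r1 capped at 10
        c = np.array([0, 0, -1])                   # linprog minimizes, so negate biomass flux

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x, -res.fun)   # optimal flux distribution and growth; all fluxes hit 10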

    Safety system design optimisation

    Get PDF
    This thesis investigates the efficiency of a design optimisation scheme that is appropriate for systems which require a high likelihood of functioning on demand. Traditional approaches to the design of safety-critical systems follow the preliminary design, analysis, appraisal and redesign stages until what is regarded as an acceptable design is achieved. For safety systems whose failure could result in loss of life it is imperative that the best use of the available resources is made and a system which is optimal, not just adequate, is produced.

    The object of the design optimisation problem is to minimise system unavailability through manipulation of the design variables, such that limitations placed on them by constraints are not violated. Commonly, in mathematical optimisation problems there will be an explicit objective function which defines how the characteristic to be minimised is related to the variables. For the safety system problem, an explicit objective function cannot be formulated, and as such, system performance is assessed using the fault tree method. Through the use of house events, a single fault tree is constructed to represent the failure causes of every potential design, which avoids the time-consuming task of constructing a fault tree for each design investigated during the optimisation procedure. Once the fault tree has been constructed for the design in question, it is converted to a binary decision diagram (BDD) for analysis.

    A genetic algorithm is first employed to perform the system optimisation; the practicality of this approach is demonstrated initially through application to a High-Integrity Protection System (HIPS) and subsequently to a more complex Firewater Deluge System (FDS). An alternative optimisation scheme achieves the final design specification by solving a sequence of optimisation problems. Each of these problems is defined by assuming some form of the objective function and specifying a sub-region of the design space over which this function will be representative of the system unavailability. The thesis concludes with attention to various optimisation techniques which possess features able to address difficulties in the optimisation of safety-critical systems. Specifically, consideration is given to the use of a statistically designed experiment and a logical search approach.
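
    The sketch below gives the flavour of the genetic algorithm approach: choose a redundancy level for each subsystem so as to minimize system unavailability within a cost budget. A simple closed-form unavailability stands in for the fault tree/BDD evaluation used in the thesis, and all rates, costs and GA settings are illustrative assumptions.

        import random

        q    = [0.01, 0.05, 0.02]    # per-component unavailability for each subsystem
        cost = [3.0, 1.0, 2.0]       # per-component cost
        BUDGET, MAX_RED = 15.0, 4    # cost budget, maximum redundancy level

        def unavailability(design):
            u = 1.0
            for qi, n in zip(q, design):
                u *= 1.0 - qi**n     # a subsystem fails only if all n copies fail
            return 1.0 - u

        def fitness(design):
            total = sum(c * n for c, n in zip(cost, design))
            return unavailability(design) + max(0.0, total - BUDGET)  # budget penalty

        def ga(pop_size=30, gens=100):
            pop = [[random.randint(1, MAX_RED) for _ in q] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness)
                elite = pop[:pop_size // 2]              # keep the better half
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, len(q))    # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.2:            # mutation
                        child[random.randrange(len(q))] = random.randint(1, MAX_RED)
                    children.append(child)
                pop = elite + children
            return min(pop, key=fitness)

        best = ga()
        print(best, unavailability(best))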

    Using probability density functions to analyze the effect of external threats on the reliability of a South African power grid

    Get PDF
    Includes bibliographical references.

    The implications of reliability-based decisions are a vital component of the control and management of power systems. Network planners strive to achieve an optimum level of investments and reliability. Network operators, on the other hand, aim at mitigating the costs associated with low levels of reliability. Effective decision making requires the management of uncertainties in the process applied. Thus, the modelling of reliability inputs, the methodology applied in assessing network reliability, and the interpretation of the reliability outputs should be carefully considered in reliability analyses.

    This thesis applies probability density functions, as opposed to deterministic averages, to model component failures. The probabilistic models are derived from historical failure data that is usually confined to finite ranges. Thus, the Beta distribution, which has the unique characteristic of being able to be rescaled to a different finite range, is selected. The thesis presents a new reliability evaluation technique that is based on the sequential Monte Carlo simulation. The technique applies a time-dependent probabilistic modelling approach to network reliability parameters. The approach uses Beta probability density functions to model stochastic network parameters while taking into account seasonal and time-of-day influences. While the modelling approach can be applied to different aspects such as intermittent power supply and system loading, it is applied in this thesis to model the failure and repair rates of network components.

    Unlike conventional sequential Monte Carlo methods, the new technique does not require the derivation of an inverse translation function for the probability distribution applied. The conventional Monte Carlo technique simulates the up and down component states when building their chronological cycles. The new technique applied here focuses instead on simulating the down states of component chronological cycles. The simulation determines the number of down states, when they will occur, and how long they will last before developing the chronological cycle. Tests performed on a published network show that focussing on the down states significantly improves the computation times of a sequential Monte Carlo simulation. Also, the reliability results of the new sequential Monte Carlo technique are more dependent on the input failure models than on the number of simulation runs or the stopping criterion applied to a simulation, and in this respect it gives results different from present standard approaches.

    The thesis also applies the new approach to a real bulk power network. The bulk network is part of the South African power grid; thus, the network threats considered and the corresponding failure data collected are typical of real South African conditions. The thesis shows that probability density functions are superior to deterministic average values when modelling reliability parameters. Probability density functions reflect the variability in reliability parameters through their dispersion and skewness. The time-dependent probabilistic approach is applied in both planning and operational reliability analyses. The component failure models developed show that variability in network parameters is different for planning and operational reliability analyses. The thesis shows how the modelling approach is used to translate long-term failure models into operational (short-term) failure models.

    DIgSILENT and MATLAB software packages are used to perform network stability and reliability simulations in this thesis. The reliability simulation results of the time-dependent probabilistic approach show that the perception of a network's reliability is significantly affected when probability distribution functions that account for the full range of parameter values are applied as inputs. The results also show that the application of the probabilistic models to network components must be considered in the context of either network planning or operation. Furthermore, the risk-based approach applied to the interpretation of reliability indices significantly influences the perception of the network's reliability performance. The risk-based approach allows the uncertainty accepted in a network planning or operation decision to be quantified.
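
    The down-state-focused sampling idea can be sketched compactly: draw how many outages a component suffers in a period, when each begins, and how long it lasts, with parameters taken from Beta distributions rescaled to finite ranges. The Beta parameters and ranges below are illustrative assumptions, and the sketch ignores the seasonal and time-of-day structure, and any overlap between outages, that the thesis models.

        import random

        HOURS_PER_YEAR = 8760

        def rescaled_beta(a, b, lo, hi):
            """Beta(a, b) sample rescaled from [0, 1] to the finite range [lo, hi]."""
            return lo + (hi - lo) * random.betavariate(a, b)

        def sample_down_states(years):
            """Simulate only the down states of a component's chronological cycle."""
            downs = []
            for _ in range(years):
                n_fail = round(rescaled_beta(2.0, 5.0, 0.0, 6.0))   # outages this year
                for _ in range(n_fail):
                    start = random.uniform(0, HOURS_PER_YEAR)        # when it begins
                    repair = rescaled_beta(2.0, 2.0, 1.0, 24.0)      # duration in hours
                    downs.append((start, start + repair))
            return sorted(downs)

        downs = sample_down_states(years=1000)
        u = sum(end - start for start, end in downs) / (1000 * HOURS_PER_YEAR)
        print(f"estimated component unavailability: {u:.4f}")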

    Biometric Systems

    Get PDF
    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Optimization of vehicle routing and scheduling with travel time variability - application in winter road maintenance

    Get PDF
    This study developed a mathematical model for optimizing vehicle routing and scheduling, which can be used to collect travel time information and also to perform winter road maintenance operations (e.g., salting, plowing). The objective of this research was to minimize the total vehicle travel time to complete a given set of service tasks, subject to resource constraints (e.g., truck capacity, fleet size) and operational constraints (e.g., service time windows, service time limit). The nature of the problem is to design vehicle routes and schedules to perform the required service on predetermined road segments, which can be interpreted as an arc routing problem (ARP). By using a network transformation technique, an ARP can be transformed into a well-studied node routing problem (NRP). A set-partitioning (SP) approach was introduced to formulate the problem as an integer programming problem (IPP). To solve this problem, firstly, a number of feasible routes were generated, subject to resource and operational constraints. A genetic-algorithm-based heuristic was developed to improve the efficiency of generating feasible routes. Secondly, the corresponding travel time of each route was computed. Finally, the feasible routes were entered into the solver (CPLEX) to obtain the final optimized results.

    The impact of travel time variability on vehicle routing and scheduling for transportation planning was also considered in this study. Out of concern for vehicle and pedestrian safety, federal, state, and local agencies usually lean towards a conservative approach with constant travel time for the planning of winter roadway maintenance rather than an aggressive approach; that is, they would rather have a redundancy of plow trucks than a shortage. The proposed model and solution algorithm were validated with an empirical case study of 41 snow sections in the northwest area of New Jersey. Comprehensive analyses were performed for both a deterministic travel time setting and a time-dependent travel time setting. The results show that a model that includes time-dependent travel time produces better results than one in which travel time is underestimated or overestimated in transportation planning. In addition, a scenario-based analysis suggests that the current NJDOT operation, based on the given snow sector design, service routes and fleet size, can be improved by the proposed model, which considers time-dependent travel time and the geometry of the road network to optimize vehicle routing and scheduling. In general, the benefit of better routing and scheduling design for snow plowing is reflected in a smaller minimum required fleet size and shorter total vehicle travel time. The depot location and number of service routes also have an impact on the final optimized results. This suggests that managers should consider the depot location, vehicle fleet sizing and the routing design problem simultaneously at the planning stage to minimize the total cost of snow plowing operations.
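
    Once feasible routes have been generated, the set-partitioning step is a compact integer program: select routes so that every road segment is served exactly once at minimum total travel time. The sketch below uses SciPy's milp solver (SciPy >= 1.9) in place of CPLEX; the routes, segments and travel times are illustrative assumptions, and fleet-size and time-window constraints are omitted.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        segments = ["s1", "s2", "s3", "s4"]
        routes = [                       # (segments covered, travel time in minutes)
            ({"s1", "s2"}, 50),
            ({"s3"},       30),
            ({"s2", "s3"}, 45),
            ({"s1"},       35),
            ({"s4"},       20),
            ({"s3", "s4"}, 40),
        ]

        # A[i][j] = 1 if route j covers segment i
        A = np.array([[1 if s in cov else 0 for cov, _ in routes] for s in segments])
        times = np.array([t for _, t in routes], dtype=float)

        res = milp(c=times,
                   constraints=LinearConstraint(A, lb=1, ub=1),  # each segment exactly once
                   integrality=np.ones(len(routes)),             # integer selection variables
                   bounds=Bounds(0, 1))                          # binary 0/1
        chosen = [j for j, x in enumerate(res.x) if x > 0.5]
        print(chosen, res.fun)   # selected routes and minimal total travel time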