10,922 research outputs found

    Assessing the reliability of multistate flow networks considering distance constraints

    Full text link
    Evaluating the reliability of complex technical networks, such as those in energy distribution, logistics, and transportation systems, is of paramount importance. These networks are often represented as multistate flow networks (MFNs). While there has been considerable research on assessing MFN reliability, most studies overlook a critical factor: transmission distance constraints. Such constraints are typical in real-world applications, such as Internet infrastructure, where controlling the distances between data centers, network nodes, and end-users is vital for low latency and efficient data transmission. This paper addresses the evaluation of MFN reliability under distance constraints. Specifically, it focuses on determining the probability that at least d flow units can be transmitted successfully from a source node to a sink node using only paths whose lengths do not exceed a predefined distance limit λ. We introduce an effective algorithm for this problem, provide a benchmark example to illustrate its application, and analyze its computational complexity.
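The quantity of interest above, the probability that at least d units reach the sink over paths no longer than λ, can be illustrated with a Monte Carlo sketch on a hypothetical toy MFN. Here hop counts stand in for distances, and the s-t paths are arc-disjoint, so the distance-constrained max flow is simply the sum of each admissible path's bottleneck capacity; this is an illustration of the problem, not the paper's exact algorithm:

```python
import random

# Hypothetical toy MFN: two arc-disjoint s-t paths, each arc's capacity a
# random variable with the given probabilities. With arc-disjoint paths,
# the lambda-constrained max flow is the sum of admissible-path bottlenecks.
PATHS = {
    ("s", "a", "t"): [([0, 1, 2], [0.1, 0.3, 0.6]),   # arc s->a
                      ([0, 1, 2], [0.1, 0.3, 0.6])],  # arc a->t
    ("s", "b", "c", "t"): [([0, 2], [0.2, 0.8]),
                           ([0, 2], [0.2, 0.8]),
                           ([0, 2], [0.2, 0.8])],
}

def reliability(d, lam, trials=100_000, seed=7):
    """Estimate P(at least d units reach t using only paths of <= lam hops)."""
    rng = random.Random(seed)
    admissible = [arcs for path, arcs in PATHS.items() if len(path) - 1 <= lam]
    hits = 0
    for _ in range(trials):
        flow = 0
        for arcs in admissible:
            # sampled capacity of a path = min of its sampled arc capacities
            flow += min(rng.choices(levels, probs)[0] for levels, probs in arcs)
        hits += flow >= d
    return hits / trials

print(reliability(d=2, lam=2))  # only the 2-hop path s-a-t is admissible
print(reliability(d=2, lam=3))  # both paths are admissible
```

Tightening λ from 3 to 2 excludes the 3-hop path, so the estimated reliability drops, which is exactly the effect the distance constraint is meant to capture.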

    Building Reliable Budget-Based Binary-State Networks

    Full text link
    Everyday life is driven by various networks, such as supply chains for distributing raw materials, semi-finished goods, and final products; the Internet of Things (IoT) for connecting devices and exchanging data; utility networks for transmitting fuel, power, water, electricity, and 4G/5G service; and social networks for sharing information and connections. The binary-state network is the most basic network model, in which the state of each component is either success or failure. Network reliability plays an important role in evaluating the performance of network planning, design, and management. Because ever more networks are being deployed in the real world, ensuring their reliability matters, and a reliable network must often be built within a limited budget. However, existing studies focus on a budget limit for each minimal path (MP) in the network without considering the total budget of the entire network. We propose a novel concept for building a more reliable binary-state network under a total budget limit, along with an algorithm based on the binary-addition-tree algorithm (BAT) and stepwise vectors to solve the problem efficiently.
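The BAT core that this line of work builds on can be sketched as follows: enumerate every binary state vector of the edges by repeatedly "adding 1" to the vector, and accumulate the probability of each vector whose working edges still connect source to sink. This is a plain exact enumeration on a hypothetical bridge network, without the budget or stepwise-vector machinery from the paper:

```python
# Hypothetical bridge network: nodes 1..4, source 1, sink 4.
EDGES = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
P_WORK = [0.9, 0.9, 0.9, 0.9, 0.9]  # success probability of each edge

def connected(state, src=1, dst=4):
    """DFS over the edges marked 1 (working) in the state vector."""
    adj = {}
    for (u, v), s in zip(EDGES, state):
        if s:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def bat_reliability():
    """Exact reliability via binary-addition enumeration of all 2^n states."""
    n = len(EDGES)
    state = [0] * n
    rel = 0.0
    while True:
        if connected(state):
            prob = 1.0
            for s, p in zip(state, P_WORK):
                prob *= p if s else 1.0 - p
            rel += prob
        # binary addition: add 1 to the vector, carrying to the left
        i = n - 1
        while i >= 0 and state[i] == 1:
            state[i] = 0
            i -= 1
        if i < 0:
            return rel
        state[i] = 1

print(bat_reliability())
```

Because every state vector is visited exactly once, the result is exact; the exponential state count is also why such enumeration only scales to small networks.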

    Risk mitigation decisions for IT security

    Get PDF
    Enterprises must manage their information risk as part of their larger operational risk management program, and managers must choose how to control for such risk. This article defines the flow risk reduction problem and presents a formal model using a workflow framework. Three different control placement methods are introduced to solve the problem, and a comparative analysis is presented using a robust test set of 162 simulations. One year of simulated attacks is used to validate the quality of the solutions. We find that the math programming control placement method yields substantial improvements in risk reduction and risk reduction on investment compared to the heuristics that managers would typically use. The contribution of this research is to provide managers with methods that substantially reduce information security risk while obtaining significantly better returns on their security investments. By using a workflow approach to control placement, which guides the manager to examine the entire infrastructure holistically, this research is unique in enabling information risk to be examined strategically. © 2014 ACM
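As a rough illustration of control placement under a budget (the workflow nodes, costs, and risk-cut factor below are all hypothetical, and a brute-force search stands in for the paper's math-programming method):

```python
from itertools import combinations

# Hypothetical workflow: each node has an annual breach probability and a loss
# if breached; placing a control at a node costs money and removes a fixed
# fraction of that node's breach probability.
NODES = {  # name: (breach_prob, loss_if_breached)
    "ingest":  (0.30, 10_000),
    "approve": (0.10, 50_000),
    "archive": (0.05, 80_000),
}
CONTROL_COST = 2_000
RISK_CUT = 0.8  # a control removes 80% of a node's breach probability

def expected_loss(controlled):
    total = 0.0
    for name, (p, loss) in NODES.items():
        if name in controlled:
            p *= 1.0 - RISK_CUT
        total += p * loss
    return total

def best_placement(budget):
    """Exhaustive search over control subsets that fit the budget."""
    names = list(NODES)
    best = (expected_loss(frozenset()), frozenset())
    for r in range(1, len(names) + 1):
        if r * CONTROL_COST > budget:
            break
        for subset in combinations(names, r):
            cand = expected_loss(frozenset(subset))
            if cand < best[0]:
                best = (cand, frozenset(subset))
    return best

loss, placed = best_placement(budget=4_000)
print(placed, round(loss, 2))
```

Note that the optimal placement skips the node with the highest breach probability: what matters is expected loss reduction per control, which is the kind of trade-off an optimization-based placement captures and a naive "protect the riskiest node first" heuristic can miss.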

    Multi-Objective Model to Improve Network Reliability Level under Limited Budget by Considering Selection of Facilities and Total Service Distance in Rescue Operations

    Get PDF
    Sudden disasters may damage facilities, transportation networks, and other critical infrastructure, delaying rescue efforts and causing huge losses. Facility selection and a reliable transportation network play an important role in emergency rescue. In this paper, the reliability level between two points in a network is defined from the viewpoints of minimal edge cuts and minimal paths, respectively, and the equivalence of the two definitions is proven. Based on this, a multi-objective optimization model is proposed: the first objective minimizes the total service distance, and the second maximizes the network reliability level. The original model is transformed into one with three objectives, which are then combined into a single objective by weighting. The model is applied to a case study, and the results are analyzed to verify its effectiveness.
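The weighted-sum scalarization mentioned above can be sketched as follows, with hypothetical facility data; taking the selection's reliability level as the worst selected facility's level is an assumption for illustration, not the paper's definition:

```python
from itertools import combinations

# Hypothetical candidates: name -> (total service distance, reliability level)
FACILITIES = {
    "F1": (120, 0.90),
    "F2": (150, 0.97),
    "F3": (100, 0.80),
    "F4": (140, 0.93),
}

def select(k, w_dist=0.5, w_rel=0.5):
    """Weighted sum: normalized distance plus reliability turned into a cost."""
    max_dist = max(d for d, _ in FACILITIES.values()) * k
    best_score, best_set = float("inf"), None
    for subset in combinations(FACILITIES, k):
        dist = sum(FACILITIES[f][0] for f in subset)
        # assumed: reliability level of a selection = worst facility's level
        rel = min(FACILITIES[f][1] for f in subset)
        score = w_dist * dist / max_dist + w_rel * (1.0 - rel)
        if score < best_score:
            best_score, best_set = score, set(subset)
    return best_set, best_score

print(select(k=2))
```

Shifting the weights w_dist and w_rel traces out different compromises between the two objectives, which is precisely what the weighting method in the paper exploits.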

    Aggressive and reliable high-performance architectures - techniques for thermal control, energy efficiency, and performance augmentation

    Get PDF
    As more and more transistors fit on a single chip, consumers of the electronics industry continue to expect a decline in cost-per-function. Advancements in process technology offer steady improvements in system performance, which manifest as shrinking area, faster circuits, and improved battery life. However, this migration toward sub-micron/nanometer technologies presents a new set of challenges, as the system becomes extremely sensitive to any voltage, temperature, or process variation. One approach to immunize the system from the adverse effects of these variations is to add sufficient safety margins to the operating clock frequency of the system. Clearly, this approach is overly conservative, because the worst-case scenarios rarely occur. Moreover, process technology in the nanoscale era has already hit the power and frequency walls. Regardless of these challenges, present-day processors need to run not only faster but also cooler, using less energy. At a juncture where no further improvement in clock frequency is possible, data-dependent latching through Timing Speculation (TS) provides a silver lining. Timing speculation is a widely known method for realizing better-than-worst-case systems. TS is aggressive in nature: the mechanism dynamically tunes the system frequency beyond the worst-case limits obtained from application characteristics to enhance the performance of systems-on-chip (SoCs). However, such aggressive tuning has adverse consequences that must be overcome. Power dissipation, on-chip temperature, and reliability are key issues that cannot be ignored. A carefully designed power management technique combined with reliable, controlled, aggressive clocking not only attempts to constrain power dissipation within a limit but also improves performance whenever possible. In this dissertation, we present a novel power level switching mechanism by redefining the existing voltage-frequency pairs. 
We introduce an aggressive yet reliable framework for energy-efficient thermal control. We were able to achieve up to 40% speed-up compared to a base scheme without overclocking, and we observe that up to 75% savings in energy-delay-squared product (ED2) relative to the base architecture are possible. We showcase the loss of efficiency in present chip multiprocessor systems due to excess power supplied, and propose Utilization-aware Task Scheduling (UTS), a power management scheme that increases the energy efficiency of chip multiprocessors. Our experiments demonstrate that UTS, along with aggressive timing speculation, squeezes maximum performance out of the system without loss of efficiency or breaching power and thermal constraints. From our evaluation we infer that UTS improves performance by up to 12% through aggressive power level switching and achieves over 50% ED2 savings compared to traditional power management techniques. Aggressively clocked systems with TS as their central theme operate at clock frequencies beyond the specified safe limits, exploiting data dependence on circuit critical paths. However, the margin for performance enhancement is restricted by the extreme difference between short paths and critical paths. In this thesis, we show that increasing the lengths of the short paths of a circuit increases the margin of TS, leading to performance improvement in aggressively designed systems. We develop the Min-arc algorithm to efficiently add delay buffers to selected short paths while keeping the area penalty down. We show that, using our algorithm, it is possible to increase the circuit's contamination delay by up to 30% without affecting the propagation delay, with moderate area overhead. We also explore the possibility of increasing short-path delays further by relaxing the constraint on propagation delay, achieving even higher performance. 
Overall, we bring out the interrelationship between power, temperature, and reliability in aggressively clocked systems. Our main objective is to achieve maximal performance benefits and improved energy efficiency within thermal constraints by effectively combining dynamic frequency scaling, dynamic voltage scaling, and reliable overclocking. We provide solutions that improve existing power management in chip multiprocessors to dynamically maximize system utilization and satisfy power constraints within safe thermal limits.
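The short-path padding idea behind this dissertation's Min-arc work can be illustrated on a hypothetical gate-level DAG (this sketch is not the Min-arc algorithm itself): compute the propagation delay (longest input-to-output path) and contamination delay (shortest such path), then pad a short path's edge by its slack so the contamination delay rises while the critical path is untouched:

```python
EDGES = {  # hypothetical gate-level DAG: edge -> delay
    ("in", "g1"): 1, ("g1", "out"): 1,                   # short path, delay 2
    ("in", "g2"): 3, ("g2", "g3"): 3, ("g3", "out"): 4,  # critical path, delay 10
}

def path_delays(edges):
    """All input->output path delays by DFS (fine for tiny DAGs)."""
    adj = {}
    for (u, v), d in edges.items():
        adj.setdefault(u, []).append((v, d))
    out = []
    def dfs(u, acc):
        if u == "out":
            out.append(acc)
            return
        for v, d in adj.get(u, []):
            dfs(v, acc + d)
    dfs("in", 0)
    return out

delays = path_delays(EDGES)
prop, contam = max(delays), min(delays)
print(prop, contam)  # 10 2

# Pad the short path's edge by its slack: contamination delay rises to meet
# the propagation delay, and the critical path is untouched.
padded = dict(EDGES)
padded[("g1", "out")] += prop - contam  # insert buffers worth 8 delay units
print(max(path_delays(padded)), min(path_delays(padded)))  # 10 10
```

A larger contamination delay widens the timing-speculation window, which is the margin the dissertation exploits; in real circuits the padding must also respect the area budget, which the sketch ignores.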

    Development of a Parallel BAT and Its Applications in Binary-state Network Reliability Problems

    Full text link
    Various networks are broadly and deeply applied in real-life applications, and reliability is the most important index for measuring the performance of all network types. Among the various algorithms, only implicit enumeration algorithms, such as depth-first search, breadth-first search, the universal generating function methodology, the binary decision diagram, and the binary-addition-tree algorithm (BAT), can calculate exact network reliability. However, implicit enumeration algorithms can only solve small-scale network reliability problems. The BAT was recently proposed as a simple, fast, easy-to-code, and flexible make-to-fit exact-solution algorithm, and experimental results show that the BAT and its variants outperform other implicit enumeration algorithms. Hence, to overcome the scale limitation mentioned above, a new parallel BAT (PBAT) is proposed that improves the BAT using a multithreaded compute architecture to calculate binary-state network reliability, which is fundamental to all types of network reliability problems. From the analysis of time complexity and experiments conducted on 20 benchmark binary-state network reliability problems, PBAT is able to efficiently solve medium-scale network reliability problems.
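One way such an enumeration can be parallelized is by prefix splitting: fix the first k bits of the edge-state vector and hand each of the 2^k prefixes to a worker, which then enumerates the remaining bits independently. The sketch below shows that splitting idea only, on a small hypothetical series-parallel network; it is not PBAT itself:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical series-parallel network: two parallel edges 1-2 in series
# with two parallel edges 2-3; source 1, sink 3.
EDGES = [(1, 2), (1, 2), (2, 3), (2, 3)]
P = 0.9  # success probability of every edge

def connected(state, src=1, dst=3):
    """DFS over the working edges of one state vector."""
    adj = {}
    for (u, v), s in zip(EDGES, state):
        if s:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def worker(prefix):
    """Sum the probabilities of all connected completions of one prefix."""
    rest = len(EDGES) - len(prefix)
    rel = 0.0
    for suffix in product((0, 1), repeat=rest):
        state = prefix + suffix
        if connected(state):
            prob = 1.0
            for s in state:
                prob *= P if s else 1.0 - P
            rel += prob
    return rel

def parallel_reliability(k=2, threads=4):
    """Partial sums over the 2^k prefixes are independent, so they parallelize."""
    prefixes = list(product((0, 1), repeat=k))
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(worker, prefixes))

print(parallel_reliability())
```

Because the per-prefix partial sums never interact, no locking is needed and the partition is embarrassingly parallel; a production implementation would balance prefix workloads across cores, which this sketch does not attempt.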

    Reliability-aware and energy-efficient system level design for networks-on-chip

    Get PDF
    2015 Spring. Includes bibliographical references. With CMOS technology aggressively scaling into the ultra-deep sub-micron (UDSM) regime and application complexity growing rapidly in recent years, processors today are being driven to integrate multiple cores on a chip. Such chip multiprocessor (CMP) architectures offer unprecedented levels of computing performance for highly parallel emerging applications in the era of digital convergence. However, a major challenge facing the designers of these emerging multicore architectures is the increased likelihood of failure due to the rise in transient, permanent, and intermittent faults caused by a variety of factors that become more prevalent as technology scales. On-chip interconnect architectures are particularly susceptible to faults that can corrupt transmitted data or prevent it from reaching its destination. Reliability concerns in UDSM nodes have in part contributed to the shift from traditional bus-based communication fabrics to network-on-chip (NoC) architectures, which provide better scalability, performance, and utilization than buses. In this thesis, to overcome potential faults in NoCs, we began by exploring fault-tolerant routing algorithms. Under the constraint of deadlock freedom, we exploit the inherent redundancy in NoCs due to multiple paths between packet sources and sinks and propose different fault-tolerant routing schemes that achieve much better fault-tolerance capabilities than traditional routing schemes. The proposed schemes also use replication opportunistically to optimize the balance between energy overhead and arrival rate. As 3D integrated circuit (3D-IC) technology with wafer-to-wafer bonding has recently been proposed as a promising candidate for future CMPs, we also propose a fault-tolerant routing scheme for 3D NoCs that outperforms the existing popular routing schemes in terms of energy consumption, performance, and reliability. 
To quantify reliability and provide different levels of intelligent protection, we propose, for the first time, the network vulnerability factor (NVF) metric to characterize the vulnerability of NoC components to faults. NVF determines the probability that faults in NoC components manifest as errors in the final program output of the CMP system. With NVF-aware partial protection of NoC components, almost 50% of the energy cost can be saved compared to the traditional approach of comprehensively protecting all NoC components. Lastly, we focus on the problem of fault-tolerant NoC design, which involves many NP-hard sub-problems such as core mapping, fault-tolerant routing, and fault-tolerant router configuration. We propose a novel design-time (RESYN) synthesis framework and a hybrid design- and run-time (HEFT) synthesis framework to trade off energy consumption and reliability in the NoC fabric at the system level for CMPs. Together, our research in fault-tolerant NoC routing, reliability modeling, and reliability-aware NoC synthesis substantially enhances NoC reliability and energy efficiency beyond what is possible with traditional approaches and state-of-the-art strategies from prior work.
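The path redundancy that fault-tolerant NoC routing exploits can be shown with a small sketch (the mesh size and fault set are hypothetical, and plain breadth-first search stands in for the thesis's routing schemes): a search over the mesh still delivers a packet around faulty links that would block a fixed dimension-ordered (XY) route:

```python
from collections import deque

W, H = 4, 4  # hypothetical 4x4 mesh of routers
FAULTY = {((1, 0), (2, 0)), ((1, 1), (1, 2))}  # broken unidirectional links

def neighbors(node):
    """Mesh neighbors reachable over non-faulty links."""
    x, y = node
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        nx, ny = nxt
        if 0 <= nx < W and 0 <= ny < H and (node, nxt) not in FAULTY:
            yield nxt

def route(src, dst):
    """Shortest fault-free route by BFS, exploiting the mesh's redundant paths."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in neighbors(u):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None  # destination unreachable

print(route((0, 0), (3, 0)))
```

A plain XY route from (0, 0) to (3, 0) would try the faulty link (1, 0)→(2, 0) and fail; the search detours around it at the cost of two extra hops. Real NoC routers must make such decisions with purely local state while preserving deadlock freedom, which this global sketch sidesteps.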

    Models and Algorithms for Addressing Travel Time Variability: Applications from Optimal Path Finding and Traffic Equilibrium Problems

    Get PDF
    An optimal path finding problem and a traffic equilibrium problem are two important, fundamental, and interrelated topics in transportation research. Under travel time variability, road networks are considered stochastic: link travel times are treated as random variables with known probability density functions. By considering the effect of travel time variability and the corresponding risk-taking behavior of travelers, this dissertation proposes models and algorithms for addressing travel time variability, with applications to optimal path finding and traffic equilibrium problems. Specifically, two new optimal path finding models and two novel traffic equilibrium models are proposed for stochastic networks. To adaptively determine a reliable path with the minimum travel time budget required to meet a user-specified reliability threshold α, an adaptive α-reliable path finding model is proposed. It is formulated as a chance-constrained model under a dynamic programming framework, and a discrete-time algorithm is developed based on the properties of the model. In addition to accounting for the reliability aspect of travel time variability, the α-reliable mean-excess path finding model further addresses the unreliability aspect of late trips beyond the travel time budget. It is formulated as a stochastic mixed-integer nonlinear program, and a practical double relaxation procedure is developed to solve this difficult problem. Recognizing that travelers are interested not only in saving travel time but also in reducing their risk of being late, an α-reliable mean-excess traffic equilibrium (METE) model is proposed. Furthermore, a stochastic α-reliable mean-excess traffic equilibrium (SMETE) model is developed by incorporating travelers' perception error, where route choice decisions are determined by the perceived distribution of the stochastic travel time. 
Both models explicitly examine the effects of the reliability and unreliability aspects of travel time variability in a network equilibrium framework. Each is formulated as a variational inequality (VI) problem and solved by a route-based algorithm built on the modified alternating direction method. In conclusion, this study explores the effects of the various aspects (reliability and unreliability) of travel time variability on travelers' route choice decisions by considering their risk preferences. The proposed models provide novel views of the optimal path finding and traffic equilibrium problems under uncertainty, and the proposed solution algorithms enable potential applicability to practical problems.
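The two path metrics underlying these models can be computed exactly when link travel times have discrete distributions: the α-reliable budget is the smallest time b with P(T ≤ b) ≥ α, and the mean-excess time captures the tail beyond that budget. The sketch below uses a hypothetical two-link path and one common CVaR-style definition of mean-excess time (the expected travel time of trips that exceed the budget); it illustrates the concepts, not the dissertation's algorithms:

```python
LINKS = [  # hypothetical independent links: {travel_time: probability}
    {10: 0.8, 20: 0.2},
    {5: 0.5, 15: 0.5},
]

def path_distribution(links):
    """Exact distribution of the path travel time by discrete convolution."""
    dist = {0: 1.0}
    for link in links:
        nxt = {}
        for t, p in dist.items():
            for lt, lp in link.items():
                nxt[t + lt] = nxt.get(t + lt, 0.0) + p * lp
        dist = nxt
    return dict(sorted(dist.items()))

def budget_and_mean_excess(alpha):
    """Smallest b with P(T <= b) >= alpha, and the mean of the tail beyond b."""
    dist = path_distribution(LINKS)
    cum = 0.0
    for t, p in dist.items():
        cum += p
        if cum >= alpha:
            budget = t
            break
    tail = {t: p for t, p in dist.items() if t > budget}
    tail_mass = sum(tail.values())
    mean_excess = (sum(t * p for t, p in tail.items()) / tail_mass
                   if tail_mass > 0 else budget)
    return budget, mean_excess

print(budget_and_mean_excess(alpha=0.9))
```

A risk-neutral traveler would compare paths by mean travel time alone; the budget and mean-excess pair additionally distinguishes paths whose means agree but whose late-trip tails differ, which is the behavioral distinction the METE and SMETE models build on.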