Risk mitigation decisions for IT security
Enterprises must manage their information risk as part of their larger operational risk management program. Managers must choose how to control such information risk. This article defines the flow risk reduction problem and presents a formal model using a workflow framework. Three control placement methods are introduced to solve the problem, and a comparative analysis is presented using a robust test set of 162 simulations. One year of simulated attacks is used to validate the quality of the solutions. We find that the math-programming control placement method yields substantial improvements in risk reduction and risk reduction on investment compared to the heuristics that managers would typically use to solve the problem. The contribution of this research is to provide managers with methods that substantially reduce information security risk while obtaining significantly better returns on their security investments. By using a workflow approach to control placement, which guides the manager to examine the entire infrastructure holistically, this research is unique in enabling information risk to be examined strategically. © 2014 ACM
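The control-placement idea above can be sketched in miniature. The following is a toy illustration only, not the article's model: the workflow nodes, exposure values, control effectiveness, and budget are all hypothetical, and exhaustive enumeration stands in for the math-programming method (which scales far better on real instances).

```python
from itertools import combinations

# Hypothetical workflow: each node carries an exposure value, and placing a
# control on a node removes a fixed fraction of that node's exposure.
exposure = {"ingest": 9.0, "transform": 4.0, "approve": 6.0, "archive": 2.0}
EFFECTIVENESS = 0.8  # a control removes 80% of a node's exposure (assumed)
BUDGET = 2           # number of controls we can afford (assumed)

def residual_risk(controlled):
    # Total exposure remaining after controls are placed on `controlled`.
    return sum(v * (1 - EFFECTIVENESS if n in controlled else 1.0)
               for n, v in exposure.items())

def exhaustive(budget):
    # Optimal placement by enumeration; tractable only for tiny workflows,
    # standing in here for the math-programming control placement method.
    return min(combinations(exposure, budget), key=residual_risk)

best = exhaustive(BUDGET)
```

On this toy instance the optimum covers the two highest-exposure nodes, which is exactly the kind of whole-infrastructure view the workflow formulation is meant to enforce.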
Probabilistic Model Checking for Energy Analysis in Software Product Lines
In a software product line (SPL), a collection of software products is
defined by their commonalities in terms of features rather than by
specifying all products explicitly one by one. Several verification
techniques have been adapted to establish temporal properties of SPLs.
Symbolic and family-based model checking have proven successful in tackling
the combinatorial blow-up that arises when reasoning about many feature
combinations. However, most formal verification approaches for SPLs
presented in the literature focus on static SPLs, where the features of a
product are fixed and cannot be changed at runtime. This is in contrast to
dynamic SPLs, which allow the feature combination of a product to be adapted
dynamically after deployment. The main contribution of this paper is a
compositional modeling framework for dynamic SPLs that supports
probabilistic and nondeterministic choices and allows for quantitative
analysis. We specify feature changes at runtime within an automata-based
coordination component, enabling reasoning over strategies for triggering
dynamic feature changes that optimize various quantitative objectives, e.g.,
energy or monetary costs and reliability. Our framework admits a natural and
conceptually simple translation into the input language of the prominent
probabilistic model checker PRISM. This facilitates the application of
PRISM's powerful symbolic engine to the operational behavior of dynamic SPLs
and their family-based analysis against various quantitative queries. We
demonstrate the feasibility of our approach with a case study of an
energy-aware bonding network device.
A Cycle-Based Formulation and Valid Inequalities for DC Power Transmission Problems with Switching
It is well-known that optimizing network topology by switching on and off
transmission lines improves the efficiency of power delivery in electrical
networks. In fact, the U.S. Energy Policy Act of 2005 (Section 1223) states that
the U.S. should "encourage, as appropriate, the deployment of advanced
transmission technologies" including "optimized transmission line
configurations". As such, many authors have studied the problem of determining
an optimal set of transmission lines to switch off to minimize the cost of
meeting a given power demand under the direct current (DC) model of power flow.
This problem is known in the literature as the Direct-Current Optimal
Transmission Switching Problem (DC-OTS). Most research on DC-OTS has focused on
heuristic algorithms for generating quality solutions or on the application of
DC-OTS to crucial operational and strategic problems such as contingency
correction, real-time dispatch, and transmission expansion. The mathematical
theory of the DC-OTS problem is less well-developed. In this work, we formally
establish that DC-OTS is NP-Hard, even if the power network is a
series-parallel graph with at most one load/demand pair. Inspired by Kirchhoff's
Voltage Law, we give a cycle-based formulation for DC-OTS, and we use the new
formulation to build a cycle-induced relaxation. We characterize the convex
hull of the cycle-induced relaxation, and the characterization provides strong
valid inequalities that can be used in a cutting-plane approach to solve the
DC-OTS. We give details of a practical implementation, and we show promising
computational results on standard benchmark instances.
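Why switching a line off can lower cost under the DC model can be seen on a two-line toy network. Everything below is hypothetical (demand, generator prices, reactances, capacities), and brute-force enumeration stands in for the cutting-plane method: the point is only that Kirchhoff's Voltage Law couples parallel flows, so a small line can bottleneck a large one until it is opened.

```python
from itertools import combinations

DEMAND = 10.0
CHEAP, EXPENSIVE = 1.0, 10.0                 # $/MW at sending / receiving bus
LINES = {"A": (1.0, 8.0), "B": (1.0, 3.0)}   # line: (reactance, capacity)

def dispatch_cost(closed):
    # Max cheap transfer across the closed parallel lines. KVL forces the
    # flow split to be proportional to susceptance (1/reactance), so the
    # tightest closed line caps the whole corridor.
    if closed:
        b = {l: 1.0 / LINES[l][0] for l in closed}
        total_b = sum(b.values())
        transfer = min(LINES[l][1] * total_b / b[l] for l in closed)
    else:
        transfer = 0.0
    transfer = min(transfer, DEMAND)
    # Whatever the cheap generator cannot deliver, the expensive one must.
    return transfer * CHEAP + (DEMAND - transfer) * EXPENSIVE

subsets = [set(c) for r in range(len(LINES) + 1)
           for c in combinations(LINES, r)]
best = min(subsets, key=dispatch_cost)
```

With both lines closed, KVL forces an even split, so line B's small capacity limits the corridor; opening B lets line A carry its full capacity and lowers the dispatch cost, despite there being strictly less transmission equipment in service.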
Building Reliable Budget-Based Binary-State Networks
Everyday life is driven by various networks, such as supply chains for
distributing raw materials, semi-finished goods, and final products; the
Internet of Things (IoT) for connecting devices and exchanging data; utility
networks for transmitting fuel, power, water, and 4G/5G signals; and social
networks for sharing information and connections. The binary-state network
is a basic network model in which the state of each component is either
success or failure, i.e., binary. Network reliability plays an important
role in evaluating the performance of network planning, design, and
management. As more networks are deployed in the real world, the need to
ensure their reliability grows, and it becomes necessary to build reliable
networks within limited budgets. However, existing studies focus on a budget
limit for each minimal path (MP) in the network without considering the
total budget of the entire network. We propose a novel formulation of the
problem of building a more reliable binary-state network under a total
budget limit. In addition, we propose an algorithm based on the
binary-addition-tree algorithm (BAT) and stepwise vectors to solve the
problem efficiently.
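The reliability quantity being optimized above can be computed for a tiny binary-state network by state enumeration in the spirit of a binary-addition tree: every component-state vector in {0,1}^m is visited in binary-counting order and the probabilities of the source-to-sink-connected states are summed. The topology and success probability below are hypothetical, and `itertools.product` stands in for the BAT's binary counting.

```python
from itertools import product

# Hypothetical network: two disjoint s-t paths, s-a-t and s-b-t.
EDGES = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
P = 0.9  # success probability of each component (assumed)

def connected(up):
    # Search from s using only the working edges (up[i] == 1).
    seen, frontier = {"s"}, ["s"]
    while frontier:
        node = frontier.pop()
        for (u, v), ok in zip(EDGES, up):
            if ok:
                for x, y in ((u, v), (v, u)):
                    if x == node and y not in seen:
                        seen.add(y)
                        frontier.append(y)
    return "t" in seen

# Sum Pr[state] over all 2^m component-state vectors that connect s to t.
reliability = sum(
    (P ** sum(state)) * ((1 - P) ** (len(EDGES) - sum(state)))
    for state in product((0, 1), repeat=len(EDGES))
    if connected(state)
)
```

For two independent two-edge paths this matches the closed form 1 - (1 - p^2)^2; exact enumeration like this is exponential in the number of components, which is why BAT-style pruning and stepwise vectors matter on realistic networks.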
Reliability-aware and energy-efficient system level design for networks-on-chip
2015 Spring. Includes bibliographical references. With CMOS technology aggressively scaling into the ultra-deep sub-micron (UDSM) regime and application complexity growing rapidly in recent years, processors today are being driven to integrate multiple cores on a chip. Such chip multiprocessor (CMP) architectures offer unprecedented levels of computing performance for highly parallel emerging applications in the era of digital convergence. However, a major challenge facing the designers of these emerging multicore architectures is the increased likelihood of failure due to the rise in transient, permanent, and intermittent faults caused by a variety of factors that are becoming more prevalent with technology scaling. On-chip interconnect architectures are particularly susceptible to faults that can corrupt transmitted data or prevent it from reaching its destination. Reliability concerns in UDSM nodes have in part contributed to the shift from traditional bus-based communication fabrics to network-on-chip (NoC) architectures, which provide better scalability, performance, and utilization than buses. In this thesis, to overcome potential faults in NoCs, my research began by exploring fault-tolerant routing algorithms. Under the constraint of deadlock freedom, we exploit the inherent redundancy in NoCs, which offer multiple paths between packet sources and sinks, and propose fault-tolerant routing schemes that achieve much better fault-tolerance capabilities than traditional routing schemes. The proposed schemes also use replication opportunistically to optimize the balance between energy overhead and arrival rate. As 3D integrated circuit (3D-IC) technology with wafer-to-wafer bonding has recently been proposed as a promising candidate for future CMPs, we also propose a fault-tolerant routing scheme for 3D NoCs that outperforms existing popular routing schemes in terms of energy consumption, performance, and reliability.
To quantify reliability and provide different levels of intelligent protection, we propose, for the first time, the network vulnerability factor (NVF) metric to characterize the vulnerability of NoC components to faults. NVF determines the probability that a fault in an NoC component manifests as an error in the final program output of the CMP system. With NVF-aware partial protection of NoC components, almost 50% of the energy cost can be saved compared to the traditional approach of comprehensively protecting all NoC components. Lastly, we focus on the problem of fault-tolerant NoC design, which involves many NP-hard sub-problems such as core mapping, fault-tolerant routing, and fault-tolerant router configuration. We propose a novel design-time (RESYN) synthesis framework and a hybrid design- and run-time (HEFT) synthesis framework to trade off energy consumption and reliability in the NoC fabric at the system level for CMPs. Together, our research in fault-tolerant NoC routing, reliability modeling, and reliability-aware NoC synthesis substantially enhances NoC reliability and energy efficiency beyond what is possible with traditional approaches and state-of-the-art strategies from prior work.
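The redundancy-based routing idea described above can be sketched on a 2D mesh. This is a simplified illustration, not the thesis's scheme: the mesh coordinates and fault set are hypothetical, the fallback is plain dimension-order reversal (XY then YX), and the deadlock-freedom argument that real fault-tolerant routing must provide is omitted entirely.

```python
FAULTY = {((1, 0), (2, 0))}  # assumed set of broken unidirectional links

def hops(src, dst, order):
    # Deterministic dimension-order routing: exhaust each axis in `order`,
    # returning the list of (from, to) link hops the packet would take.
    (x, y), path = src, []
    for axis in order:
        while (axis == "x" and x != dst[0]) or (axis == "y" and y != dst[1]):
            if axis == "x":
                nxt = (x + (1 if dst[0] > x else -1), y)
            else:
                nxt = (x, y + (1 if dst[1] > y else -1))
            path.append(((x, y), nxt))
            x, y = nxt
    return path

def route(src, dst):
    # Exploit path redundancy: try XY routing first; if any hop crosses a
    # faulty link, fall back to the YX path instead.
    for order in ("xy", "yx"):
        path = hops(src, dst, order)
        if not any(h in FAULTY for h in path):
            return order, path
    return None  # both deterministic paths hit faults
```

With the fault placed on the XY path from (0, 0) to (2, 1), the router silently delivers the packet via the YX path; richer schemes add adaptive turns and opportunistic replication on top of this basic redundancy.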