Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning
The secret keys of critical network authorities - such as time, name,
certificate, and software update services - represent high-value targets for
hackers, criminals, and spy agencies wishing to use these keys secretly to
compromise other hosts. To protect authorities and their clients proactively
from undetected exploits and misuse, we introduce CoSi, a scalable witness
cosigning protocol ensuring that every authoritative statement is validated and
publicly logged by a diverse group of witnesses before any client will accept
it. A statement S collectively signed by W witnesses assures clients that S has
been seen, and not immediately found erroneous, by those W observers. Even if S
is compromised in a fashion not readily detectable by the witnesses, CoSi still
guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to
risk that the compromise will soon be detected by one of the W witnesses.
Because clients can verify collective signatures efficiently without
communication, CoSi protects clients' privacy, and offers the first
transparency mechanism effective against persistent man-in-the-middle attackers
who control a victim's Internet access, the authority's secret key, and several
witnesses' secret keys. CoSi builds on existing cryptographic multisignature
methods, scaling them to support thousands of witnesses via signature
aggregation over efficient communication trees. A working prototype
demonstrates CoSi in the context of timestamping and logging authorities,
enabling groups of over 8,000 distributed witnesses to cosign authoritative
statements in under two seconds.
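To make the verification claim concrete, the following toy sketch shows Schnorr-style collective signing, the multisignature idea CoSi builds on. It is a minimal illustration, not the CoSi protocol itself: CoSi adds tree-structured communication, handling of unavailable witnesses, and production-scale elliptic-curve groups, whereas the group parameters and message below are toy assumptions.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with g of prime order q mod p.
# Illustrative sizes only; real deployments use ~256-bit groups.
p, q, g = 23, 11, 2

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Each witness i holds a secret key x_i with public key X_i = g^x_i mod p.
secret_keys = [secrets.randbelow(q - 1) + 1 for _ in range(5)]
public_keys = [pow(g, x, p) for x in secret_keys]

def aggregate_key() -> int:
    X = 1
    for Xi in public_keys:
        X = X * Xi % p                      # X = g^(sum of x_i)
    return X

def cosign(msg: str):
    # Round 1: every witness commits to a random nonce v_i; the
    # commitments multiply into one aggregate commitment V.
    nonces = [secrets.randbelow(q - 1) + 1 for _ in secret_keys]
    V = 1
    for v in nonces:
        V = V * pow(g, v, p) % p
    c = H(V, aggregate_key(), msg)          # collective challenge
    # Round 2: each witness returns its response share; shares sum to r.
    r = sum(v - c * x for v, x in zip(nonces, secret_keys)) % q
    return c, r

def verify(msg: str, c: int, r: int) -> bool:
    # The client recomputes V' = g^r * X^c and rechecks the challenge:
    # one small check, with no communication, for any number of witnesses.
    X = aggregate_key()
    V = pow(g, r, p) * pow(X, c, p) % p
    return H(V, X, msg) == c

c, r = cosign("timestamp 2016-01-01T00:00:00Z")
assert verify("timestamp 2016-01-01T00:00:00Z", c, r)
```

The last step is the point: the client checks one aggregate signature against one aggregate key, so verification cost does not grow with the number of witnesses.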
Robustness - a challenge also for the 21st century: A review of robustness phenomena in technical, biological and social systems as well as robust approaches in engineering, computer science, operations research and decision aiding
Notions of robustness exist in many facets. They come from different disciplines and reflect different worldviews; consequently, they very often contradict each other, which makes the term less applicable in a general context. Robustness approaches are often limited to the specific problems for which they were developed, so notions and definitions may prove wrong when transferred to another domain of validity, i.e. another context: a definition may be correct in one context yet fail to hold in another. Therefore, to speak meaningfully of robustness we need to specify the domain of validity, i.e. the system, property, and uncertainty of interest. As proved by Ho et al. in an optimization context with finite and discrete domains, without prior knowledge about the problem there exists no solution whatsoever that is more robust than any other. As with the No Free Lunch Theorems of Optimization (NFLTs), we have to exploit the problem structure in order to make a solution more robust. This optimization problem is directly linked to a robustness/fragility tradeoff that has been observed in many contexts, e.g. the 'robust, yet fragile' property of HOT (Highly Optimized Tolerance) systems. Another issue is that robustness is tightly bound to other phenomena, such as complexity, which themselves lack a clear definition or theoretical framework. Consequently, this review tries to find common aspects across many different approaches and phenomena rather than to build a general theorem of robustness, which might not exist anyway, because complex phenomena often need to be described from a pluralistic view to address as many of their aspects as possible. First, many different robustness problems are reviewed across many different disciplines. Second, common aspects are discussed, in particular the relationship between functional and structural properties. This paper argues that robustness phenomena are also a challenge for the 21st century. Robustness is a useful quality of a model or system in terms of the 'maintenance of some desired system characteristics despite fluctuations in the behaviour of its component parts or its environment' (see [Carlson and Doyle, 2002], p. 2). We define robustness phenomena as solutions with balanced tradeoffs, and robust design principles and robustness measures as means to balance tradeoffs.
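The Ho et al. result is easy to see in a toy instance. The Python sketch below is a constructed illustration, not an example from the paper: the designs, scenarios, and payoffs are made up to show three designs that tie under a worst-case robustness measure, so only prior knowledge about which scenarios matter can single one out.

```python
# Performance of three hypothetical designs across three scenarios.
performance = {
    "design_1": {"A": 9, "B": 2, "C": 5},
    "design_2": {"A": 2, "B": 9, "C": 5},
    "design_3": {"A": 5, "B": 5, "C": 2},
}

def worst_case(design: str) -> int:
    # A classic robustness measure: performance in the worst scenario.
    return min(performance[design].values())

def expected(design: str, prior: dict) -> float:
    # Robustness given prior knowledge of scenario likelihoods.
    return sum(prior[s] * v for s, v in performance[design].items())

# Without a prior, worst-case robustness ties all designs at 2: none is
# "more robust than any other", mirroring the NFLT-style result.
print({d: worst_case(d) for d in performance})

# Exploiting problem structure (scenario A dominates) breaks the tie.
prior = {"A": 0.6, "B": 0.3, "C": 0.1}
print(max(performance, key=lambda d: expected(d, prior)))  # design_1
```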
Scaling Causality Analysis for Production Systems
Causality analysis reveals how program values influence each other.
It is important for debugging, optimizing, and understanding the execution of
programs. This thesis scales causality analysis to production systems
consisting of desktop and server applications as well as large-scale Internet
services. This enables developers to employ causality analysis to debug and
optimize complex, modern software systems. This thesis shows that it is
possible to scale causality analysis both to fine-grained instruction-level
analysis and to analysis of Internet-scale distributed systems with thousands of
discrete software components by developing and employing automated methods to
observe and reason about causality.
First, we observe causality at a fine-grained instruction level by developing
the first taint tracking framework to support tracking millions of input
sources. We also introduce flexible taint tracking to allow
for scoping different queries and dynamic filtering of inputs, outputs, and
relationships.
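As an illustration of the idea, the sketch below models taint propagation with per-value source sets and a filter hook for scoping which sources are recorded. It is a deliberately minimal Python model with hypothetical names, not the thesis's framework, which tracks taint at the instruction level and uses far more compact representations to scale to millions of sources.

```python
class Tainted:
    """A value carrying the set of input sources that influenced it."""

    def __init__(self, value, sources=frozenset()):
        self.value = value
        self.sources = frozenset(sources)

    def __add__(self, other):
        other_value = getattr(other, "value", other)
        other_sources = getattr(other, "sources", frozenset())
        # The result is influenced by every source of either operand.
        return Tainted(self.value + other_value, self.sources | other_sources)

def taint_source(value, label, keep=lambda label: True):
    # Flexible tracking: a query-supplied filter scopes which inputs
    # are recorded as sources at all.
    return Tainted(value, {label} if keep(label) else frozenset())

a = taint_source(3, "config:timeout")
b = taint_source(4, "request:header_x")
c = a + b + 10
print(c.value, sorted(c.sources))  # 17 ['config:timeout', 'request:header_x']
```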
Next, we introduce the Mystery Machine, which uses a "big data" approach to
discover causal relationships between software components in a large-scale
Internet service. We leverage the fact that large-scale Internet services
receive a large number of requests in order to observe counterexamples to
hypothesized causal relationships. Using the discovered causal relationships, we
identify the critical path for request execution and use the critical path
analysis to explore potential scheduling optimizations.
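The counterexample-driven idea can be sketched compactly: hypothesize every pairwise ordering between components, then let each observed request trace delete the hypotheses it contradicts. The Python below is a simplification in that spirit with hypothetical component names, not the Mystery Machine's implementation.

```python
from itertools import permutations

components = ["dns", "frontend", "cache", "db", "render"]

# Start by hypothesizing every ordering: (a, b) means "a always precedes b".
hypotheses = set(permutations(components, 2))

def refine(trace):
    """trace: component start order observed for one request."""
    position = {c: i for i, c in enumerate(trace)}
    # Any trace in which b started at or before a refutes (a, b).
    hypotheses.difference_update(
        {(a, b) for (a, b) in hypotheses
         if a in position and b in position and position[a] >= position[b]}
    )

# With enough traffic, races are observed in both orders and only the
# true happens-before relationships survive.
refine(["dns", "frontend", "cache", "db", "render"])
refine(["dns", "frontend", "db", "cache", "render"])  # cache/db race flips
print(sorted(hypotheses))
```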
Finally, we explore using causality to make data-quality tradeoffs in
Internet services. A data-quality tradeoff is an explicit decision by a software
component to return lower-fidelity data in order to improve response time or
minimize resource usage. We perform a study of data-quality tradeoffs in a
large-scale Internet service to show the pervasiveness of these
tradeoffs. We develop DQBarge, a system that enables better data-quality
tradeoffs by propagating critical information along the causal path of request
processing. Our evaluation shows that DQBarge helps Internet services mitigate
load spikes, improve utilization of spare resources, and implement dynamic
capacity planning.

PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/135888/1/mcchow_1.pd
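A minimal sketch of the data-quality tradeoff mechanism described above, assuming a request context that propagates a latency budget and a load signal along the causal path (the field names and thresholds are hypothetical illustrations, not DQBarge's actual interface):

```python
from dataclasses import dataclass, field

@dataclass
class RequestContext:
    deadline_ms: float                   # end-to-end latency budget
    elapsed_ms: float = 0.0
    load: float = 0.0                    # propagated downstream-load signal
    provenance: list = field(default_factory=list)

def fetch_recommendations(ctx: RequestContext) -> list:
    remaining = ctx.deadline_ms - ctx.elapsed_ms
    if ctx.load > 0.9 or remaining < 20:
        # Explicitly trade fidelity for latency under pressure, and record
        # the decision so operators and downstream components can see it.
        ctx.provenance.append("recommendations: low-fidelity cached top-N")
        return ["cached_top_1", "cached_top_2"]
    ctx.provenance.append("recommendations: full personalized ranking")
    return ["personalized_1", "personalized_2", "personalized_3"]

ctx = RequestContext(deadline_ms=100, elapsed_ms=85, load=0.5)
print(fetch_recommendations(ctx), ctx.provenance)  # low-fidelity path taken
```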
Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance
Research into active networking has provided the incentive to revisit what
has traditionally been classified as distinct properties and characteristics of
information transfer such as protocol versus service; at a more fundamental
level this paper considers the blending of computation and communication by
means of complexity. The specific service examined in this paper is network
self-prediction enabled by Active Virtual Network Management Prediction.
Computation/communication is analyzed via Kolmogorov Complexity. The result is
a mechanism to understand and improve the performance of active networking and
Active Virtual Network Management Prediction in particular. The Active Virtual
Network Management Prediction mechanism allows information, in various states
of algorithmic and static form, to be transported in the service of prediction
for network management. The results are generally applicable to algorithmic
transmission of information. Kolmogorov Complexity is used and experimentally
validated as a theory describing the relationship among algorithmic
compression, complexity, and prediction accuracy within an active network.
Finally, the paper concludes with a complexity-based framework for Information
Assurance that attempts to take a holistic view of vulnerability analysis.
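Kolmogorov Complexity itself is uncomputable, so work in this vein standardly upper-bounds it with a real compressor. The sketch below illustrates the general compression/predictability relationship the abstract describes; it is not the paper's experimental setup, and the traces are synthetic.

```python
import random
import zlib

def complexity_ratio(data: bytes) -> float:
    """Compressed size over raw size: a crude Kolmogorov upper bound."""
    return len(zlib.compress(data, level=9)) / len(data)

periodic = bytes(range(16)) * 64                            # highly regular
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1024))   # no structure

print(f"regular trace: {complexity_ratio(periodic):.2f}")   # far below 1
print(f"random trace:  {complexity_ratio(noisy):.2f}")      # near (or above) 1

# A low ratio means a short effective description, so a small model can
# predict the source accurately; a ratio near 1 leaves little to predict.
```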
A Strategic Approach to Agricultural Research Program Planning in Sub-Saharan Africa
Recent studies have shown that agricultural research can have high payoffs in Africa, but impact depends on how well technology fits with evolving needs and capacity in the agricultural sector and the rest of the economy. Structural adjustment policies (e.g., market liberalization, currency devaluation) and political change are transforming user demands for new technology and the economic environment in which technology must perform. The challenge is how to design agricultural research as a strategic input to promote broad-based economic growth, structural transformation, and food security in the increasingly market-driven, but fragile, economies of Africa.

Keywords: Food Security, Food Policy, Agricultural Research, Research and Development/Tech Change/Emerging Technologies. JEL: Q18.
Supply chain risk management: capabilities and performance
Growing environmental turbulence and increasingly complex supply chain networks have resulted in greater supply chain disruptions. Firms' supply chain risk management performance varies due to differences in their recognition of the need for, and their ability to cultivate, supply chain risk management capabilities. This study helps to identify which capabilities have the greatest effect on supply chain risk management and firm performance, and describes how to achieve them. A meta-analysis of empirical supply chain risk management studies reveals the confounded state of the field and points toward future work that can provide consensus and progress. A multiple case study describes organizational learning from supply chain disruption and identifies a new construct, bracketing, that is necessary to deviate from a firm's risk-dominant logic and respond to changes in the environment.
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications that recognize user-related social and cognitive activities in any situation and at any location. Awareness of context gives applications the capability of being conscious of the physical environment or situation around mobile device users, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze, and share local sensory knowledge for large-scale community use by creating a smart network, one capable of making autonomous logical decisions to actuate environmental objects and to assist individuals. However, many open challenges remain, most of which arise because the middleware services provided on mobile devices have limited resources in terms of power, memory, and bandwidth. It is therefore critically important to study how these drawbacks can be addressed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context-awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and proposes possible solutions.
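The sense-infer-respond loop at the heart of such applications can be sketched in a few lines; the sensor fields, thresholds, and adaptations below are hypothetical illustrations rather than an API from any surveyed system.

```python
def infer_situation(ctx: dict) -> str:
    # Fuse raw sensory context into a higher-level situation.
    if ctx["speed_mps"] > 2.5 and ctx["gps_accuracy_m"] < 20:
        return "commuting"
    if ctx["ambient_db"] < 35 and ctx["hour"] >= 23:
        return "sleeping"
    return "idle"

def respond(situation: str) -> str:
    # Proactive, situation-keyed adaptations.
    return {
        "commuting": "prefetch route and switch to audio notifications",
        "sleeping": "mute notifications and defer sync to save power",
        "idle": "normal operation",
    }[situation]

sensed = {"speed_mps": 3.1, "gps_accuracy_m": 8, "ambient_db": 55, "hour": 18}
print(respond(infer_situation(sensed)))  # prefetch route and ...
```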