An Exploratory Study of Field Failures
Field failures, that is, failures caused by faults that escape the testing
phase and manifest in the field, are unavoidable. Improving verification
and validation activities before deployment can identify and promptly remove
many, but not all, faults, and users may still experience a number of annoying
problems while using their software systems. This paper investigates the nature
of field failures to understand to what extent further improving in-house
verification and validation activities can reduce the number of failures in the
field, and frames the need for new approaches that operate in the field. We
report the results of an analysis of the bug reports of five applications
belonging to three different ecosystems, propose a taxonomy of field failures,
and discuss why failures belonging to the identified classes cannot be detected
at design time but must be addressed at runtime. We observe that many faults
(70%) are intrinsically hard to detect at design time.
Applying Real Options Thinking to Information Security in Networked Organizations
An information security strategy of an organization participating in a networked business sets out the plans for designing a variety of actions that ensure confidentiality, availability, and integrity of the company's key information assets, as well as authentication and non-repudiation of the authorized users of these assets. We assume that the primary objective of security efforts in a company is improving and sustaining resiliency: security contributes to the ability of an organization to withstand discontinuities and disruptive events, to return to its normal operating state, and to adapt to ever-changing risk environments. When companies collaborating in a value web view security as a business issue, risk assessment and cost-benefit analysis techniques become a necessary and explicit part of their resource allocation and budgeting process, whether security spending is treated as a capital investment or as an operating expenditure.
This paper contributes to the application of quantitative approaches to assessing the risks, costs, and benefits associated with the various components making up the security strategy of a company participating in value networks. We take a risk-based approach to determining what types of security a strategy should include and how much of each type is enough. We adopt a real-options-based perspective on security and propose to value the extent to which alternative components of a security strategy contribute to organizational resiliency and protect key information assets from being impeded, disrupted, or destroyed.
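The real-options perspective this abstract takes can be illustrated with a textbook one-period binomial valuation of the option to defer an investment, here read as a security investment. This is a generic sketch, not the authors' model; the function name and all figures are hypothetical:

```python
def defer_option_value(v0, u, d, cost, rate):
    """One-period binomial value of the option to defer an investment.

    v0:   current value of the protected asset / project
    u, d: multiplicative up and down factors for next period's value
    cost: investment outlay required to secure the asset
    rate: risk-free rate per period
    """
    # Risk-neutral probability of the up move.
    p = (1 + rate - d) / (u - d)
    # Deferring means investing next period only if it pays off.
    payoff_up = max(v0 * u - cost, 0.0)
    payoff_down = max(v0 * d - cost, 0.0)
    return (p * payoff_up + (1 - p) * payoff_down) / (1 + rate)

# Hypothetical numbers: investing now has negative NPV (100 - 110),
# yet the option to wait one period is worth keeping.
option_value = defer_option_value(100.0, 1.5, 0.5, 110.0, 0.05)
```

Under these made-up figures the deferral option is positive while the invest-now NPV is not, which is the kind of comparison a real-options analysis of a security budget turns on.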
Breaking the habit: measuring and predicting departures from routine in individual human mobility
Researchers studying daily-life mobility patterns have recently shown that humans are typically highly predictable in their movements. However, no existing work has examined the boundaries of this predictability, where human behaviour transitions temporarily from routine patterns to highly unpredictable states. To address this shortcoming, we tackle two interrelated challenges. First, we develop a novel information-theoretic metric, called instantaneous entropy, to analyse an individual's mobility patterns and identify temporary departures from routine. Second, to predict such departures in the future, we propose the first Bayesian framework that explicitly models breaks from routine, showing that it outperforms current state-of-the-art predictors.
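The intuition behind an entropy-based departure signal can be sketched with a plain sliding-window Shannon entropy over a symbolic location trace. The paper's instantaneous entropy metric is more elaborate; this is only a minimal illustration with hypothetical location labels:

```python
import math
from collections import Counter

def sliding_window_entropy(locations, window=24):
    """Shannon entropy (bits) of location visits over a sliding window.

    A spike relative to an individual's baseline entropy suggests a
    temporary departure from routine.
    """
    entropies = []
    for i in range(len(locations) - window + 1):
        counts = Counter(locations[i:i + window])
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in counts.values())
        entropies.append(h)
    return entropies

# A routine day (two places) versus a varied day (six places):
routine = ["home"] * 12 + ["work"] * 12
varied = ["home", "work", "gym", "cafe", "park", "shop"] * 4
```

With a 24-slot window, the routine trace yields exactly 1 bit (two equiprobable places), while the varied trace yields log2(6) ≈ 2.58 bits, so a simple threshold on the series already separates the two regimes.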
Why (and How) Networks Should Run Themselves
The proliferation of networked devices, systems, and applications that we
depend on every day makes managing networks more important than ever. The
increasing security, availability, and performance demands of these
applications suggest that these increasingly difficult network management
problems be solved in real time, across a complex web of interacting protocols
and systems. Alas, just as the importance of network management has increased,
the network has grown so complex that it is seemingly unmanageable. In this new
era, network management requires a fundamentally new approach. Instead of
optimizations based on closed-form analysis of individual protocols, network
operators need data-driven, machine-learning-based models of end-to-end and
application performance based on high-level policy goals and a holistic view of
the underlying components. Instead of anomaly detection algorithms that operate
on offline analysis of network traces, operators need classification and
detection algorithms that can make real-time, closed-loop decisions. Networks
should learn to drive themselves. This paper explores this concept, discussing
how we might attain this ambitious goal by more closely coupling measurement
with real-time control and by relying on learning for inference and prediction
about a networked application or system, as opposed to closed-form analysis of
individual protocols
Excitable human dynamics driven by extrinsic events in massive communities
Using empirical data from a social media site (Twitter) and on trading
volumes of financial securities, we analyze the correlated human activity in
massive social organizations. The activity, typically excited by real-world
events and measured by the occurrence rate of international brand names and
trading volumes, is characterized by intermittent fluctuations with bursts of
high activity separated by quiescent periods. These fluctuations are broadly
distributed with an inverse cubic tail and have long-range temporal
correlations with a power spectrum. We describe the activity by a
stochastic point process and derive the distribution of activity levels from
the corresponding stochastic differential equation. The distribution and the
corresponding power spectrum are fully consistent with the empirical
observations.
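A power-law tail like the one reported above can be checked on empirical activity data with the standard Hill estimator of the tail exponent. This is a generic sketch, unrelated to the authors' derivation, demonstrated on exact Pareto quantiles with CCDF exponent alpha = 2 (which corresponds to an inverse cubic density):

```python
import math

def hill_estimator(samples, k):
    """Hill estimator of the tail exponent alpha of a power-law
    CCDF P(X > x) ~ x^(-alpha), using the k largest observations."""
    order = sorted(samples, reverse=True)
    threshold = order[k]  # the (k+1)-th largest observation
    logs = [math.log(x / threshold) for x in order[:k]]
    return k / sum(logs)

# Synthetic check: exact quantiles of a Pareto law with alpha = 2.
n = 1000
samples = [(i / (n + 1)) ** (-1.0 / 2.0) for i in range(1, n + 1)]
```

On real activity counts the estimate is sensitive to the choice of k, so it is usually plotted across a range of k values rather than read off at a single point.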
A Taxonomy of Workflow Management Systems for Grid Computing
With the advent of Grid and application technologies, scientists and
engineers are building more and more complex applications to manage and process
large data sets, and execute scientific experiments on distributed resources.
Such application scenarios require means for composing and executing complex
workflows. Therefore, many efforts have been made towards the development of
workflow management systems for Grid computing. In this paper, we propose a
taxonomy that characterizes and classifies various approaches for building and
executing workflows on Grids. We also survey several representative Grid
workflow systems developed by various projects world-wide to demonstrate the
comprehensiveness of the taxonomy. The taxonomy not only highlights the design
and engineering similarities and differences of state-of-the-art in Grid
workflow systems, but also identifies the areas that need further research.