Collecting and Analyzing Failure Data of Bluetooth Personal Area Networks
This work presents a failure data analysis campaign on
Bluetooth Personal Area Networks (PANs) conducted on
two kinds of heterogeneous testbeds (operated for more than
one year). The results reveal how failures are distributed
and suggest how to improve the dependability of Bluetooth
PANs. Specifically, we define the failure model and then
identify the most effective recovery actions and masking
strategies that can be adopted for each failure. We then
integrate the discovered recovery actions and masking
strategies into our testbeds, improving availability by
3.64% (up to 36.6%) and reliability (in terms of the Mean
Time To Failure) by 202%, respectively.
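The reported gains can be related through the standard steady-state availability formula. A back-of-the-envelope sketch (the MTTF and MTTR values below are illustrative, not the paper's measurements):

```python
# Steady-state availability as MTTF / (MTTF + MTTR), and the effect of
# raising MTTF by 202% as the abstract reports. Numbers are illustrative.

def availability(mttf: float, mttr: float) -> float:
    """Fraction of time the system is up, given mean time to failure
    (MTTF) and mean time to repair (MTTR)."""
    return mttf / (mttf + mttr)

mttf, mttr = 100.0, 5.0            # hours; illustrative values
improved_mttf = mttf * (1 + 2.02)  # a 202% MTTF improvement

base = availability(mttf, mttr)
better = availability(improved_mttf, mttr)
```

A longer MTTF at the same MTTR always raises availability, which is why the two improvements reported in the abstract move together.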
Integrated Support for Handoff Management and Context-Awareness in Heterogeneous Wireless Networks
The overwhelming success of mobile devices and wireless
communications is stressing the need for the development of
mobility-aware services. Device mobility requires services
adapting their behavior to sudden context changes and being
aware of handoffs, which introduce unpredictable delays and
intermittent discontinuities. Heterogeneity of wireless
technologies (Wi-Fi, Bluetooth, 3G) complicates the situation,
since a different treatment of context-awareness and handoffs is
required for each solution. This paper presents a middleware
architecture designed to ease mobility-aware service
development. The architecture hides technology-specific
mechanisms and offers a set of facilities for context awareness
and handoff management. The architecture prototype works with
Bluetooth and Wi-Fi, which today represent two of the most
widespread wireless technologies. In addition, the paper discusses
motivations and design details in the challenging context of
mobile multimedia streaming applications.
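The kind of facility such a middleware could expose can be sketched as follows. A hypothetical example (the interface and names are invented for illustration, not the paper's API): services register callbacks and are notified of handoffs uniformly, regardless of the underlying technology.

```python
# Hypothetical sketch: a middleware that hides technology-specific
# mechanisms behind a single handoff-notification facility.

class MobilityMiddleware:
    def __init__(self):
        self._handoff_handlers = []

    def on_handoff(self, handler):
        """Register a callback invoked as handler(old_net, new_net)
        whenever the device moves between access networks."""
        self._handoff_handlers.append(handler)

    def _notify_handoff(self, old_net: str, new_net: str):
        # In a real middleware this would be driven by Wi-Fi/Bluetooth
        # link-layer events; here we trigger it manually.
        for handler in self._handoff_handlers:
            handler(old_net, new_net)

mw = MobilityMiddleware()
events = []
mw.on_handoff(lambda old, new: events.append((old, new)))
mw._notify_handoff("wifi:ap1", "bluetooth:pan0")
```

The service code above never mentions Wi-Fi or Bluetooth specifics, which is the decoupling the abstract argues for.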
Towards Secure Monitoring and Control Systems: Diversify!
This work discusses the role of diversity as a means toward secure monitoring and control. The intuition underlying the proposal is that diversity can be leveraged to raise the effort required to conduct a successful attack (in terms of attack resources and time) to such a level that attempting an attack becomes pointless. For example, consider an attack that requires compromising two machines in order to be successful. If the machines are identical, it suffices to compromise one machine and then repeat the exploit on the other, i.e., the probability of a successful attack on the system, PSA, is related to the probability of compromising just one machine (PSA≈PM). When the machines are different, PSA is smaller because it becomes related to the probability of compromising each machine separately (i.e., PSA≈PM1×PM2): succeeding is harder and more time-consuming. Diversity is not used here to replicate components. We claim that a monitoring and control system, when possible, can smartly combine diverse technologies to significantly increase the effort needed to conduct a successful attack. Key aspects, issues, and future research directions are briefly discussed in the following.
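The two-machine argument can be made concrete with a few lines of arithmetic. A minimal sketch, with illustrative probabilities and the independence assumption implicit in the abstract's PSA≈PM1×PM2:

```python
# Probability of a successful attack on a two-machine system,
# identical vs. diverse configurations (illustrative numbers).

def p_attack_identical(p_m: float) -> float:
    """With identical machines, one working exploit compromises both,
    so the attack succeeds roughly whenever a single machine falls."""
    return p_m

def p_attack_diverse(p_m1: float, p_m2: float) -> float:
    """With diverse machines, each must be compromised separately
    (assuming independent compromises)."""
    return p_m1 * p_m2

# Example: each machine falls with probability 0.1.
psa_identical = p_attack_identical(0.1)     # ~0.1
psa_diverse = p_attack_diverse(0.1, 0.1)    # ~0.01, an order of magnitude lower
```

Since per-machine compromise probabilities are below 1, the product is always smaller than either factor, which is the core of the diversity argument.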
Dependability Evaluation of Middleware Technology for Large-scale Distributed Caching
Distributed caching systems (e.g., Memcached) are widely used by service
providers to satisfy accesses by millions of concurrent clients. Given their
large-scale, modern distributed systems rely on a middleware layer to manage
caching nodes, to make applications easier to develop, and to apply load
balancing and replication strategies. In this work, we performed a
dependability evaluation of three popular middleware platforms, namely
Twemproxy by Twitter, Mcrouter by Facebook, and Dynomite by Netflix, to assess
availability and performance under faults, including failures of Memcached
nodes and congestion due to unbalanced workloads and network link bandwidth
bottlenecks. We point out the different availability and performance trade-offs
achieved by the three platforms, and scenarios in which few faulty components
cause cascading failures of the whole distributed system.
Comment: 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE 2020).
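One job such middleware performs is routing each cache key to a node. A simplified, illustrative sketch (Twemproxy, Mcrouter, and Dynomite use more elaborate schemes such as consistent hashing; the node names here are invented):

```python
# Hash-based key routing across Memcached nodes: load is spread over the
# pool, and a node failure only affects that node's share of the keys.

import hashlib

NODES = ["cache-0:11211", "cache-1:11211", "cache-2:11211"]

def node_for(key: str, nodes=NODES) -> str:
    """Deterministically map a cache key to one node in the pool."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

# The same key always maps to the same node, so clients agree on
# where each value lives without any coordination.
chosen = node_for("user:42")
```

A weakness of plain modulo hashing, and one reason real middleware prefers consistent hashing, is that removing a node remaps most keys, which is exactly the kind of fault-induced behavior the evaluation in this paper probes.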
Monitoring of Aging Software Systems Affected by Integer Overflows
Numerical aging-related bugs, which can manifest themselves as the accumulation of floating-point errors and the overflow of integers, represent a known but relatively neglected issue in the field of software aging and rejuvenation. Unfortunately, it is very difficult to avoid and to fix these bugs, since the rules of computer arithmetic and programming languages are often misunderstood or disregarded by programmers. Even though software rejuvenation can potentially mitigate these problems, its adoption is prevented by the lack of approaches for forecasting numerical software aging failures: in order to efficiently plan rejuvenation, the rate of numerical errors has to be known, or at least estimated. In this paper, we focus on software aging phenomena related to integer overflows. We present some examples of integer overflow issues in the MySQL open-source DBMS, and an approach for identifying symptoms of potential integer overflows by on-line monitoring.
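The forecasting idea can be sketched in a few lines. A hypothetical example of the general approach (the function, threshold, and growth model are illustrative, not the paper's implementation): monitor a monotonically growing 32-bit counter and extrapolate when it will wrap, so rejuvenation can be scheduled before the overflow manifests.

```python
# Estimate time-to-overflow of a 32-bit counter from observed growth,
# assuming roughly linear growth between samples (illustrative model).

INT32_MAX = 2**31 - 1

def time_to_overflow(samples):
    """samples: list of (timestamp_seconds, counter_value) pairs.
    Returns estimated seconds until the counter reaches INT32_MAX,
    or None if the counter is not growing."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    rate = (v1 - v0) / (t1 - t0)      # increments per second
    if rate <= 0:
        return None
    return (INT32_MAX - v1) / rate

# Example: a counter at ~2 billion, growing by ~1000 increments/second.
eta = time_to_overflow([(0, 2_000_000_000), (10, 2_000_010_000)])
```

With an estimated time-to-overflow in hand, a rejuvenation action (e.g., a restart that resets the counter) can be planned during a low-load window rather than forced by a failure.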
An Effective Approach for Injecting Faults in Wireless Sensor Networks Operating Systems
This paper presents an effective approach for injecting
faults/errors into the operating systems of Wireless Sensor
Network (WSN) nodes. The approach is based on the injection
of faults at the assembly level. Results show that,
depending on the concurrency model and on the memory
management, the operating systems react differently to
injected errors, indicating that fault containment
strategies and hang-checking assertions should be
implemented to avoid the spreading and activation of errors.
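The core mechanism of assembly-level injection can be illustrated compactly. A sketch of the idea only (the actual tooling targets real WSN operating system images; the instruction word below is an arbitrary 16-bit value): emulate a single-event upset by flipping one bit of an encoded instruction.

```python
# Emulating a hardware bit-flip fault in a 16-bit instruction word,
# the kind of corruption injected at the assembly level.

def flip_bit(word: int, bit: int, width: int = 16) -> int:
    """Return `word` with bit `bit` inverted, masked to `width` bits."""
    return (word ^ (1 << bit)) & ((1 << width) - 1)

original = 0x4303            # an arbitrary 16-bit instruction word
faulty = flip_bit(original, 9)
```

Because XOR is its own inverse, flipping the same bit again restores the original word, which makes it easy to inject a fault, run the workload, and then verify exactly what was corrupted.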
Fault Injection Analytics: A Novel Approach to Discover Failure Modes in Cloud-Computing Systems
Cloud computing systems fail in complex and unexpected ways due to unexpected
combinations of events and interactions between hardware and software
components. Fault injection is an effective means to bring out these failures
in a controlled environment. However, fault injection experiments produce
massive amounts of data, and manually analyzing these data is inefficient and
error-prone, as the analyst can miss severe failure modes that are yet unknown.
This paper introduces a new paradigm (fault injection analytics) that applies
unsupervised machine learning on execution traces of the injected system, to
ease the discovery and interpretation of failure modes. We evaluated the
proposed approach in the context of fault injection experiments on the
OpenStack cloud computing platform, where we show that the approach can
accurately identify failure modes with a low computational cost.
Comment: IEEE Transactions on Dependable and Secure Computing; 16 pages. arXiv admin note: text overlap with arXiv:1908.1164
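The idea of grouping execution traces into candidate failure modes can be sketched with a toy, stdlib-only example (the paper's actual feature extraction and unsupervised learning pipeline is more sophisticated): represent each trace as a bag-of-events vector and group traces whose vectors are close.

```python
# Toy trace clustering: traces with similar event profiles end up in
# the same group, so each group can be inspected as one failure mode.

from collections import Counter
from math import sqrt

def vectorize(trace, vocabulary):
    """Bag-of-events vector: count of each known event in the trace."""
    counts = Counter(trace)
    return [counts[event] for event in vocabulary]

def distance(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cluster(traces, threshold=1.0):
    """Greedy single-pass clustering: a trace joins the first cluster
    whose representative vector is within `threshold`, else it starts
    a new cluster."""
    vocabulary = sorted({event for trace in traces for event in trace})
    reps, clusters = [], []
    for trace in traces:
        vec = vectorize(trace, vocabulary)
        for i, rep in enumerate(reps):
            if distance(vec, rep) <= threshold:
                clusters[i].append(trace)
                break
        else:
            reps.append(vec)
            clusters.append([trace])
    return clusters

traces = [
    ["api_error", "timeout"],            # two similar failures...
    ["api_error", "timeout"],
    ["vm_crash", "restart", "restart"],  # ...and one distinct mode
]
groups = cluster(traces)
```

Instead of reading hundreds of raw traces, the analyst reviews one representative per group, which is what makes the discovery of unknown failure modes tractable.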
Who Evaluates the Evaluators? On Automatic Metrics for Assessing AI-based Offensive Code Generators
AI-based code generators are an emerging solution for automatically writing
programs starting from descriptions in natural language, by using deep neural
networks (Neural Machine Translation, NMT). In particular, code generators have
been used for ethical hacking and offensive security testing by generating
proof-of-concept attacks. Unfortunately, the evaluation of code generators
still faces several issues. The current practice uses automatic metrics, which
compute the textual similarity of generated code with ground-truth references.
However, it is not clear what metric to use, and which metric is most suitable
for specific contexts. This practical experience report analyzes a large set of
output similarity metrics on offensive code generators. We apply the metrics on
two state-of-the-art NMT models using two datasets containing offensive
assembly and Python code with their descriptions in the English language. We
compare the estimates from the automatic metrics with human evaluation and
provide practical insights into their strengths and limitations.
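What an output-similarity metric computes can be shown in miniature. A sketch only, using the standard library's SequenceMatcher ratio as a stand-in for the established metrics the paper studies (such as BLEU-style n-gram overlap or edit distance):

```python
# Token-level similarity between generated code and a ground-truth
# reference, scored on a 0..1 scale (1.0 = identical token sequences).

from difflib import SequenceMatcher

def similarity(generated: str, reference: str) -> float:
    """Compare whitespace-split token sequences of two code snippets."""
    return SequenceMatcher(None, generated.split(), reference.split()).ratio()

reference = "mov eax , 1"
score_exact = similarity("mov eax , 1", reference)    # identical tokens
score_partial = similarity("xor eax , eax", reference)
```

The paper's central question is visible even here: a partial score says the snippets share tokens, but not whether the generated code is functionally equivalent to the reference, which is why automatic metrics must be validated against human evaluation.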