
    Model checking a decentralized storage deduplication protocol

    Fifth Latin-American Symposium on Dependable Computing (LADC). Deduplication of live storage volumes in a cloud computing environment is better done by post-processing: by delaying the discovery and removal of duplicate data until after I/O requests have concluded, the impact on latency can be minimized. Compared to traditional deduplication in backup systems, which can be done in-line and in a centralized fashion, distribution and concurrency lead to increased complexity. This paper outlines a deduplication algorithm for a typical cloud infrastructure with a common storage pool and summarizes how model checking with the TLA+ toolset was used to uncover and correct some subtle concurrency issues.
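    To make the post-processing idea concrete, here is a minimal Python sketch of deferred deduplication over a shared block pool: writes complete immediately on private blocks, and a background pass later remaps content-identical blocks onto one canonical copy. All names (`BlockStore`, `dedup_pass`) are illustrative rather than from the paper, and the concurrent interleavings the paper model-checks with TLA+ are deliberately left out.

```python
import hashlib

class BlockStore:
    """Toy storage pool: volumes map logical block numbers to physical blocks."""

    def __init__(self):
        self.physical = {}   # physical block id -> bytes
        self.refcount = {}   # physical block id -> reference count
        self.volumes = {}    # volume -> {logical block -> physical id}
        self.next_id = 0

    def write(self, volume, lbn, data):
        # Foreground path: complete the I/O immediately on a private block;
        # duplicate discovery is deferred, so write latency is unaffected.
        pid = self.next_id
        self.next_id += 1
        self.physical[pid] = data
        self.refcount[pid] = 1
        old = self.volumes.setdefault(volume, {}).get(lbn)
        self.volumes[volume][lbn] = pid
        if old is not None:
            self._release(old)

    def _release(self, pid):
        self.refcount[pid] -= 1
        if self.refcount[pid] == 0:
            del self.physical[pid], self.refcount[pid]

    def dedup_pass(self):
        # Background post-processing: index blocks by content hash and remap
        # duplicates onto one canonical physical block.
        by_hash = {}
        for volume, mapping in self.volumes.items():
            for lbn, pid in list(mapping.items()):
                h = hashlib.sha256(self.physical[pid]).hexdigest()
                canonical = by_hash.setdefault(h, pid)
                if canonical != pid:
                    mapping[lbn] = canonical
                    self.refcount[canonical] += 1
                    self._release(pid)
```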

    Achieving eventual leader election in WS-discovery

    Fifth Latin-American Symposium on Dependable Computing (LADC). The Devices Profile for Web Services (DPWS) provides the foundation for seamless deployment, autonomous configuration, and joint operation of various computing devices in environments ranging from simple personal multimedia setups and home automation to complex industrial equipment and large data centers. In particular, WS-Discovery provides dynamic rendezvous for clients and services embodied in such devices. Unfortunately, the failure detection implicit in this standard is very limited, both because it embodies static timing assumptions and because it omits liveness monitoring, leading to undesirable situations in demanding application scenarios. In this paper we identify these undesirable outcomes and propose an extension of WS-Discovery that allows failure detection to achieve eventual leader election, thus preventing them.
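    The extension itself is specified in the paper, but the underlying pattern, an eventually accurate failure detector driving leader election (the Omega abstraction), can be sketched in a few lines of Python. The class and method names below are hypothetical; in DPWS terms, `on_hello` would be driven by WS-Discovery Hello announcements.

```python
import time

class EventualLeaderElector:
    """Sketch of an Omega-style failure detector layered on periodic
    Hello announcements (illustrative, not the paper's protocol)."""

    def __init__(self, my_id, peers, initial_timeout=2.0):
        self.my_id = my_id
        self.last_seen = {p: time.monotonic() for p in peers}
        self.timeout = {p: initial_timeout for p in peers}

    def on_hello(self, peer):
        # Liveness monitoring: a Hello refreshes the sender's lease. If we
        # had wrongly suspected the peer, grow its timeout, so suspicions
        # become accurate eventually despite unknown message delays.
        if self.is_suspected(peer):
            self.timeout[peer] *= 2
        self.last_seen[peer] = time.monotonic()

    def is_suspected(self, peer):
        return time.monotonic() - self.last_seen[peer] > self.timeout[peer]

    def leader(self):
        # All correct nodes eventually agree on the smallest identifier
        # that is not suspected; identifiers are assumed unique.
        alive = [p for p in self.last_seen if not self.is_suspected(p)]
        return min(alive + [self.my_id])
```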

    A simple automotive application using the FlexRay™ protocol

    The FlexRay™ protocol is emerging as the next-generation automotive communication protocol, offering high-data-rate, deterministic, fault-tolerant, and flexible in-vehicle data communication. The protocol supports both time-triggered and event-triggered data communication; a network that uses it is called a FlexRay™ network. The need for the FlexRay™ protocol stems from the substantial demand for high-capacity in-vehicle data communication between electronic components. In this work, we use Infineon SoCs as FlexRay™ nodes and establish communication between multiple nodes using the FlexRay™ protocol. A simple automotive application is developed in which temperature and magnetic-field sensors are connected to a node and the sensor data is communicated over the FlexRay™ network.
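    As a rough illustration of the time-triggered half of the protocol, the sketch below models FlexRay's static segment as a TDMA round in Python. Slot assignments, node names, and payloads are invented for the example; on real Infineon hardware this schedule is configured through the vendor's communication-controller driver rather than in application code.

```python
# Toy model of FlexRay's static segment: a TDMA round in which each node
# owns fixed slots. Slot ids and payloads below are made up for illustration.

STATIC_SLOTS = {1: "node_A", 2: "node_B", 3: "node_C"}  # slot id -> owner

def run_cycle(cycle, sensor_readings):
    """One communication cycle: every slot fires at a fixed offset, so each
    frame's latency is deterministic regardless of bus load."""
    frames = []
    for slot in sorted(STATIC_SLOTS):
        node = STATIC_SLOTS[slot]
        payload = sensor_readings.get(node)  # e.g. temperature, field strength
        if payload is not None:
            frames.append((cycle, slot, node, payload))
    return frames

print(run_cycle(0, {"node_A": {"temp_c": 41.5}, "node_C": {"field_mT": 0.8}}))
```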

    Real-time Central Processing Unit (CPU) Performance Analysis Using the Benchmarking Method

    Technology is developing rapidly in terms of performance, graphics, bandwidth, and more, affecting many aspects of life and work; this drives changes in devices and in the performance of the central processing unit. Businesses, both small and large, now exploit advances in information technology to keep work in their fields running smoothly. The benchmarking method used here is a process of profiling the hardware and establishing a yardstick for the expected performance. The test procedure evaluates central processing unit (CPU) performance at the hardware level: processor, RAM, graphics, and so on. The measurements show RAM usage of 3.1 GB on the i3 processor, with 3% GPU use, 1% disk use, 7.7 Mbps of network traffic, and very low power-supply usage; 4.2 GB on the i5, with 0% GPU, 0% disk, 7.7 Mbps network, and low power usage; and 2.5 GB on the i7, with 9% GPU, 9% disk, 104 Kbps network, and high power usage.
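    A benchmarking probe of this kind can be approximated with the real `psutil` library, as in the hedged sketch below; GPU and power-supply readings are vendor-specific and therefore omitted here, and the five-second window is an arbitrary choice.

```python
# Minimal resource-utilisation probe in the spirit of the reported
# measurements (RAM, disk, and network usage per machine).
import psutil

def sample(duration_s=5):
    net0 = psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=duration_s)  # averaged over the window
    net1 = psutil.net_io_counters()
    mem = psutil.virtual_memory()
    mbps = (net1.bytes_recv + net1.bytes_sent
            - net0.bytes_recv - net0.bytes_sent) * 8 / duration_s / 1e6
    return {
        "cpu_percent": cpu,
        "ram_used_gb": round(mem.used / 2**30, 2),
        "disk_percent": psutil.disk_usage("/").percent,
        "network_mbps": round(mbps, 2),
    }

print(sample())
```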

    Analysis of results in Dependability Benchmarking: Can we do better?

    Dependability benchmarking has become more and more important over the years in the process of system evaluation, driven by the increasing need to make systems more dependable in the presence of perturbations. Nevertheless, even though many studies have focused on different areas related to dependability benchmarking, and others on providing these benchmarks with good-quality measures, there is still a gap in the analysis of results. This paper provides a first glance at different approaches that may help fill this gap by making explicit the criteria followed in the decision-making process. This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), and the Intel Doctoral Student Honour Programme 2012. Martínez, M.; Andrés, D. D.; Ruiz García, J. C.; Friginal López, J. (2013). Analysis of results in Dependability Benchmarking: Can we do better? IEEE. https://doi.org/10.1109/IWMN.2013.6663790

    DataFlasks: an epidemic dependable key-value substrate

    Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such an assumption does not always hold for very large scale systems sized in the thousands of machines. In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and on how the use of epidemic protocols allows us to reach high dependability and scalability.
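    The epidemic layer can be illustrated with a small push-gossip simulation in Python: each round, every informed node forwards the tuple to a few peers chosen at random, so dissemination requires no structure that churn could break. The fanout and round count are illustrative, not DataFlasks parameters.

```python
import random

def gossip(nodes, key_value, seed, fanout=3, rounds=6):
    """Push-based epidemic dissemination of one tuple from a seed node."""
    informed = {seed}
    store = {seed: key_value}
    for _ in range(rounds):
        for node in list(informed):          # snapshot: set grows during the round
            for peer in random.sample(nodes, min(fanout, len(nodes))):
                store[peer] = key_value      # idempotent: duplicates are harmless
                informed.add(peer)
    return informed

nodes = list(range(1000))
reached = gossip(nodes, ("user:42", "alice"), seed=0)
print(f"{len(reached)} of {len(nodes)} nodes hold the tuple")
```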

    The rise of software vulnerability: Taxonomy of software vulnerabilities detection and machine learning approaches

    The detection of software vulnerabilities requires critical attention during the development phase to make software secure and less vulnerable. Vulnerable software invites hackers to perform malicious activities and disrupt its operation, leading to millions in financial losses for software companies. To reduce these losses, security communities have introduced many reliable and effective vulnerability detection systems aiming to detect software vulnerabilities as early as the development or testing phases. Existing surveys of such systems discuss the conventional and data-mining approaches, which are widely used and mostly consist of traditional detection techniques. However, they lack discussion of the newly trending machine learning approaches, such as supervised learning and deep learning techniques. Furthermore, existing studies fail to discuss how research interest in the software vulnerability detection community has grown throughout the years; with more discussion of this, we can predict and focus on which research problems in software vulnerability detection urgently need to be addressed. Aiming to reduce these gaps, this paper presents a taxonomy of research interests in software vulnerability detection, covering methods, detection, features, code, and datasets. These research-interest categories exhibit current trends in software vulnerability detection: the analysis shows considerable interest in addressing method and detection problems, while only a few works address code and dataset problems, indicating that much work remains to be done on code and dataset problems in the future. Furthermore, this paper extends the taxonomy of machine learning approaches used to detect software vulnerabilities: supervised learning, semi-supervised learning, ensemble learning, and deep learning. Based on the analysis, supervised learning and deep learning approaches are trending in the software vulnerability detection community, as these techniques can effectively detect vulnerabilities such as buffer overflow, SQL injection, and cross-site scripting, with detection performance of up to 95% F1 score. Finally, the paper concludes with several discussions of potential future work in software vulnerability detection in terms of datasets, multi-vulnerability detection, transfer learning, and real-world applications.
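    As a flavour of the supervised-learning branch of that taxonomy, the sketch below trains a toy classifier over token features of code snippets using scikit-learn. The four labelled samples are placeholders; real studies train on large labelled corpora and report F1 on held-out code, which is where figures like the 95% above come from.

```python
# Toy supervised vulnerability classifier: code snippets -> token features
# -> linear model. The "dataset" is a four-line placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

code_samples = [
    'strcpy(buf, user_input);',                                # unbounded copy
    'query = "SELECT * FROM t WHERE id=" + uid;',              # string-built SQL
    'strncpy(buf, user_input, sizeof(buf) - 1);',              # bounded copy
    'cursor.execute("SELECT * FROM t WHERE id=%s", (uid,))',   # parameterised
]
labels = [1, 1, 0, 0]  # 1 = vulnerable

model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+|\S"),  # crude code tokenizer
    LogisticRegression(),
)
model.fit(code_samples, labels)
print(model.predict(['strcpy(dst, attacker_controlled);']))
```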

    Load Balancing in Cloud Computing Empowered with Dynamic Divisible Load Scheduling Method

    The need to process and deal with vast amounts of data is growing as technology develops. One of the most promising technologies is Cloud Computing, which enables users to accomplish their goals while enhancing performance. Cloud Computing enters the debate over the growing requirements for data capabilities and storage capacities: not every organization has the financial resources, infrastructure, and human capital, but Cloud Computing offers an affordable infrastructure based on availability, scalability, and cost-efficiency. The Cloud can provide services to clients on demand, making it the most widely adopted system for virtual storage, yet it still has issues that are not adequately addressed and resolved. One of these is load balancing, a primary challenge: traffic must be balanced adequately across every peer rather than overloading an individual node. This paper provides an intelligent workload management algorithm that systematically balances traffic, homogeneously allocates the load on every node, prevents overloading, and improves response time for maximum performance enhancement.
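    The divisible-load intuition behind such a scheduler can be shown in a few lines: a workload that can be split arbitrarily is carved up in proportion to each node's capacity, so all nodes finish together and none is overloaded. The capacities below are invented, and the paper's dynamic variant would re-measure them at runtime; this sketch shows only a single static split.

```python
def divisible_load_split(total_units, capacities):
    """capacities: node -> work units per second. Returns node -> share,
    proportional to capacity so every node finishes at the same time."""
    total_capacity = sum(capacities.values())
    return {node: total_units * c / total_capacity
            for node, c in capacities.items()}

shares = divisible_load_split(10_000, {"vm1": 4.0, "vm2": 2.0, "vm3": 2.0})
print(shares)  # vm1 gets half the load, vm2 and vm3 a quarter each
# Each node needs shares[node] / capacity seconds: equal for all, by design.
```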

    Multi-hop Byzantine reliable broadcast with honest dealer made practical

    We revisit Byzantine-tolerant reliable broadcast with honest dealer algorithms in multi-hop networks. To tolerate Byzantine faulty nodes arbitrarily spread over the network, previous solutions require a factorial number of messages to be sent over the network when messages are not authenticated (e.g., digital signatures are not available). We propose modifications that preserve the safety and liveness properties of the original unauthenticated protocols while greatly decreasing their observed message complexity when simulated on several classes of graph topologies, potentially opening the way to their practical employment.
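    For context, here is a minimal Python sketch of the unauthenticated multi-hop delivery rule such protocols build on, in the style of Dolev's classic algorithm: relays append their identifier to the message's route, and a receiver delivers once it holds copies carried over more than f node-disjoint paths, which f Byzantine relays cannot forge. The greedy disjointness check is a simplification, and nothing here reflects the paper's actual message-reduction modifications.

```python
def greedy_disjoint_paths(paths):
    """Count a set of pairwise node-disjoint relay paths (greedy lower bound;
    an exact count would use max-flow)."""
    used, count = set(), 0
    for path in sorted(paths, key=len):
        relays = set(path)
        if relays.isdisjoint(used):
            used |= relays
            count += 1
    return count

def can_deliver(received_paths, f):
    # An empty path means a direct link from the dealer, which is assumed
    # honest, so the payload can be delivered immediately.
    if any(len(p) == 0 for p in received_paths):
        return True
    return greedy_disjoint_paths(received_paths) > f

# Three relay-disjoint copies tolerate f = 2 Byzantine relays.
print(can_deliver([("b",), ("c", "d"), ("e",)], f=2))
```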