
    Unidirectional Quorum-based Cycle Planning for Efficient Resource Utilization and Fault-Tolerance

    In this paper, we propose a greedy cycle direction heuristic to improve the generalized $\mathbf{R}$ redundancy quorum cycle technique. When applied using only single cycles rather than the standard paired cycles, the generalized $\mathbf{R}$ redundancy technique has been shown to almost halve the light-trail resources needed in the network. Our greedy heuristic improves this cycle-based routing technique's fault tolerance and dependability. For efficiency and distributed control, it is common in distributed systems and algorithms to group nodes into intersecting sets referred to as quorum sets. Optimal communication quorum sets forming optical cycles based on light-trails have been shown to route both point-to-point and multipoint-to-multipoint traffic requests flexibly and efficiently. Commonly, cycle routing techniques use pairs of cycles to achieve both routing and fault tolerance, which consumes substantial resources and creates the potential for underutilization. Instead, we use a single cycle and intentionally utilize $\mathbf{R}$ redundancy within the quorum cycles such that every point-to-point communication pair occurs in at least $\mathbf{R}$ cycles. Without the paired cycles, the direction of the quorum cycles becomes critical to fault-tolerance performance. For this we developed a greedy cycle direction heuristic; our single-fault network simulations show a reduction in missing pairs of more than 30%, which translates to significant improvements in fault coverage.
    Comment: Computer Communication and Networks (ICCCN), 2016 25th International Conference on. arXiv admin note: substantial text overlap with arXiv:1608.05172, arXiv:1608.05168, arXiv:1608.0517
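
    To make the direction question concrete, here is a minimal Python sketch, not the paper's algorithm: it assumes a simple single-fault model in which a link fault turns a directed cycle into a directed path, so an ordered pair survives only if the source still precedes the destination. The names `pairs_served`, `greedy_orient`, and the `target` parameter are hypothetical.

```python
from collections import Counter

def pairs_served(cycle, forward=True):
    """Count, for each ordered (src, dst) pair, how many of the n possible
    single-link faults the pair survives on this directed cycle.
    Assumed fault model: cutting one link leaves a directed path, and a
    pair survives only if src still precedes dst on that path."""
    nodes = list(cycle) if forward else list(reversed(cycle))
    n = len(nodes)
    survival = Counter()
    for cut in range(n):  # fault on the link nodes[cut] -> nodes[cut+1]
        path = nodes[cut + 1:] + nodes[:cut + 1]  # remaining directed path
        for i in range(n):
            for j in range(i + 1, n):
                survival[(path[i], path[j])] += 1
    return survival

def greedy_orient(cycles, target=2):
    """Greedy direction heuristic (sketch): process cycles in order and
    orient each one to maximize the survivals credited to ordered pairs
    that are still below the redundancy target."""
    total, directions = Counter(), []
    for cyc in cycles:
        options = {+1: pairs_served(cyc, True), -1: pairs_served(cyc, False)}
        def gain(served):
            return sum(min(c, max(0, target - total[p])) for p, c in served.items())
        best = max(options, key=lambda d: gain(options[d]))
        directions.append(best)
        total.update(options[best])
    return directions
```

    For example, `greedy_orient([("a", "b", "c", "d"), ("b", "c", "d", "e")])` returns a direction (+1 or -1) per cycle; a pair left unserved after a simulated fault in every chosen orientation is a "missing pair" in the abstract's sense.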

    Self-organization and management of wireless sensor networks

    Wireless sensor networks (WSNs) are a recently deployed networking technology consisting of multifunctional sensor nodes that are small in size and communicate over short distances. These sensor nodes are typically deployed densely and in large numbers, either inside the phenomenon of interest or very close to it, and can be used in various application areas (e.g., health, military, home). WSNs provide several advantages over traditional networks, such as large-scale deployment, high-resolution sensed data, and application-adaptive mechanisms. However, due to their unique characteristics (dynamic topology; ad hoc, unattended deployment; large volumes of generated data and traffic; limited bandwidth and energy), WSNs pose considerable challenges for network management and make application development nontrivial.

    Management of wireless sensor networks is extremely important to keep the whole network and its applications working properly and continuously, yet no generalized solution is available for managing and controlling these resource-constrained WSNs. In network management of WSNs, energy-efficient self-organization is one of the main challenges. Self-organization is the property by which sensor nodes organize themselves to form the network; it is challenging because of the tight constraints on the bandwidth and energy available in these networks. A self-organized sensor network can be clustered or grouped into an easily manageable network, but existing clustering schemes have various limitations; for example, they consume too much energy in cluster formation and re-formation.

    This thesis presents a novel cellular self-organizing hierarchical architecture for wireless sensor networks. The cellular architecture extends the network lifetime by efficiently utilizing node energy and supports the scalability of the system. We have analyzed the performance of the architecture analytically and by simulation; the simulation results show that our cellular architecture is more energy efficient and achieves a better energy-consumption distribution. The cellular architecture is then mapped into a management framework to support the network management system for resource-constrained WSNs. The framework is self-managing and robust to changes in the network; it is application-cooperative and optimizes itself to support the unique requirements of each application. It consists of three core functional areas: configuration management, fault management, and mobility management. For configuration management, we have developed a re-configuration algorithm that lets sensor networks energy-efficiently re-form the network topology in response to network dynamics (a node dying, nodes powering on and off, a new node joining the network, and cells merging); a sketch of this behaviour follows below. In the area of fault management, we have developed a new mechanism to detect failing nodes and recover connectivity in WSNs. For mobility management, we have developed a two-phase sensor relocation solution: redundant mobile sensors are first identified and then relocated to the target location to deal with coverage holes. All three functional areas have been evaluated and compared against existing solutions, and the results show significant improvements in re-configuration, failure detection and recovery, and sensor relocation.
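
    The following Python sketch is hypothetical, not the thesis's algorithm: it models one round of re-configuration in which a cell head low on energy hands the role to the highest-energy member and an undersized cell is merged away; `Node`, `reconfigure`, and both thresholds are assumed names.

```python
class Node:
    """A sensor node with residual energy and a liveness flag."""
    def __init__(self, node_id, energy=1.0):
        self.id, self.energy, self.alive = node_id, energy, True

def reconfigure(cells, head_threshold=0.2, min_cell_size=2):
    """One illustrative round of cellular re-configuration.

    Each cell is a dict {'head': Node or None, 'members': [Node, ...]}.
    A head whose residual energy drops below `head_threshold` hands the
    role to the highest-energy live member, and a cell shrunk below
    `min_cell_size` (node death, power-off) is merged into the previous
    cell (standing in for the geographically nearest neighbour) rather
    than triggering a full network-wide re-clustering."""
    result = []
    for cell in cells:
        cell['members'] = [n for n in cell['members'] if n.alive]
        if not cell['members']:
            continue  # every node in the cell has died
        if len(cell['members']) < min_cell_size and result:
            result[-1]['members'].extend(cell['members'])  # cell merging
            continue
        head = cell.get('head')
        if head is None or not head.alive or head.energy < head_threshold:
            cell['head'] = max(cell['members'], key=lambda n: n.energy)
        result.append(cell)
    return result
```

    Run once per management epoch, this keeps re-formation cost proportional to the cells that actually changed, which mirrors the abstract's complaint that existing schemes spend too much energy on cluster formation and re-formation.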

    Contributions to High-Throughput Computing Based on the Peer-to-Peer Paradigm

    This dissertation focuses on High Throughput Computing (HTC) systems and how to build a working HTC system using Peer-to-Peer (P2P) technologies. Traditional HTC systems, designed to process the largest possible number of tasks per unit of time, revolve around a central node that implements a queue used to store and manage submitted tasks. This central node limits the scalability and fault tolerance of the HTC system. A usual solution involves replicas of the master node that can replace it, but this solution is limited by the number of replicas used. In this thesis, we propose an alternative that follows the P2P philosophy: a completely distributed system in which all worker nodes participate in the scheduling tasks, with a physically distributed task queue implemented on top of a P2P storage system. The fault tolerance and scalability of this proposal are therefore limited only by the number of nodes in the system. The proper operation and scalability of our proposal have been validated through experimentation with a real system. The data availability provided by Cassandra, the P2P data management framework used in our proposal, is analysed by means of several stochastic models. These models can be used to make predictions about the availability of any Cassandra deployment, as well as to select the best possible configuration of any Cassandra system. To validate the proposed models, experiments with real Cassandra clusters were performed, showing that our models are good descriptors of Cassandra's availability. Finally, we propose a set of scheduling policies that try to solve, without additional waste of resources, a common problem of HTC systems: the re-execution of tasks after a failure of the node where the task was running. To reduce the number of re-executions, our policies try to find good fits between the reliability of nodes and the estimated length of each task. Extensive simulation-based experimentation shows that our policies reduce the number of re-executions, improving system performance and node utilization.
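
    As an illustration of the last idea, here is a hedged Python sketch, not the dissertation's actual policies: it pairs the longest task estimates with the most reliable free nodes, under the common assumption of exponentially distributed node failures; the `mtbf` and `est_length` fields and the matching rule are ours.

```python
import math

def survival_probability(task_length, node_mtbf):
    """P(node stays up for the whole task), assuming exponentially
    distributed times between failures with mean `node_mtbf`."""
    return math.exp(-task_length / node_mtbf)

def assign(tasks, nodes):
    """Reliability-aware matching (sketch): give the longest tasks,
    i.e. those with the most work to lose on a crash, to the nodes
    least likely to fail mid-run, reducing expected re-executions.
    Pairs up min(len(tasks), len(nodes)) entries."""
    tasks = sorted(tasks, key=lambda t: t['est_length'], reverse=True)
    nodes = sorted(nodes, key=lambda n: n['mtbf'], reverse=True)
    return [(t['id'], n['id'],
             survival_probability(t['est_length'], n['mtbf']))
            for t, n in zip(tasks, nodes)]
```

    For instance, with `tasks = [{'id': 't1', 'est_length': 7200}, {'id': 't2', 'est_length': 60}]` and two nodes whose `mtbf` values differ by an order of magnitude, the two-hour task lands on the more dependable node.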

    Information Leakage Attacks and Countermeasures

    The scientific community has been consistently working on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent as the complexity of systems and protocols increases and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakages manifest in, and impact, interconnected systems and their users. We first focus on online communications and investigate leakages in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-optimal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks (even though these networks aim to offer a greater level of privacy than TLS). Following up on these, we delve into physical-layer leakages that concern a wide array of (networked) systems such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop an exploitation neural network; our model demonstrates the capabilities of a modern adversary but also provides an inexpensive tool that auditors can use to detect such leakages early in the development cycle. Finally, we investigate techniques that further minimize the impact of leakages found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, making the exploitation of leaks difficult.
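
    To make the TLS meta-information point concrete, the sketch below, which is not the work's model or dataset, shows how visible packet sizes and directions feed a classifier; a random forest stands in for the modern machine-learning models the work uses, and `featurize`, `fingerprint`, and the feature set are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def featurize(trace, max_packets=50):
    """Fixed-length features from one TLS connection trace.
    `trace` is a list of signed packet sizes (sign = direction), which
    an eavesdropper sees even though the payloads are encrypted."""
    sizes = np.zeros(max_packets)
    sizes[:min(len(trace), max_packets)] = trace[:max_packets]
    totals = [len(trace),
              sum(s for s in trace if s > 0),   # bytes sent
              -sum(s for s in trace if s < 0)]  # bytes received
    return np.concatenate([sizes, totals])

def fingerprint(traces, labels):
    """Train a closed-world fingerprinting classifier on metadata only."""
    X = np.stack([featurize(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25)
    clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # accuracy on held-out traces
```

    The point of the exercise is that nothing in `featurize` touches plaintext: sizes, directions, and counts alone are enough signal for a classifier to identify what was fetched.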

    Behind the last line of defense: Surviving SoC faults and intrusions

    Today, leveraging the enormous modular power, diversity, and flexibility of manycore systems-on-a-chip (SoCs) requires careful orchestration of complex and heterogeneous resources, a task left to low-level software, e.g., hypervisors. In current architectures, this software forms a single point of failure and a worthwhile target for attacks: once compromised, adversaries gain access to all information and full control over the platform and the environment it controls. This article proposes Midir, an enhanced manycore architecture, effecting a paradigm shift from SoCs to distributed SoCs. Midir changes the way platform resources are controlled: it retrofits tile-based fault containment through well-known mechanisms, while securing low-overhead quorum-based consensus on all critical operations, in particular privilege management and, thus, management of containment domains. By allowing versatile redundancy management, Midir promotes resilience at all software levels, including the lowest. We explain this architecture, its associated algorithms, and its hardware mechanisms, and show, for the example of a Byzantine fault-tolerant microhypervisor, that it outperforms the highly efficient MinBFT by an order of magnitude.
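
    Midir's quorum mechanism lives in hardware, so the following Python toy is an analogy only, with our own simplified parameters: a privileged register write commits only once a quorum of replica votes on the same (epoch, value) arrives, so f faulty or compromised replicas can neither forge nor block a privilege change on their own. Midir's actual voter, quorum rules, and MinBFT comparison setup are not reproduced here.

```python
class VotedRegister:
    """Toy model of a quorum-gated privileged operation.

    Assumes 3f + 1 replicas and a simple majority quorum of 2f + 1
    matching votes; the real design's trusted voter and quorum sizes
    differ, this only illustrates the control-flow idea."""

    def __init__(self, f=1):
        self.quorum = 2 * f + 1
        self.value, self.epoch = None, 0
        self.votes = {}  # (epoch, value) -> set of replica ids

    def vote(self, replica_id, epoch, value):
        """Record one replica's vote; commit on reaching the quorum."""
        if epoch != self.epoch + 1:
            return False  # stale or premature proposal is ignored
        key = (epoch, value)
        self.votes.setdefault(key, set()).add(replica_id)
        if len(self.votes[key]) >= self.quorum:
            self.value, self.epoch, self.votes = value, epoch, {}
            return True  # write takes effect
        return False
```

    For example, with `f=1` a privilege update proposed as `vote(r, 1, 'grant-tile-7')` only lands after three distinct replicas agree, which is the sense in which no single compromised component can manage privileges alone.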