
    Generating non-conspiratorial executions

    Avoiding conspiratorial executions is useful for debugging, model checking, and refinement, and helps implement solutions to several well-known problems in faulty environments; furthermore, avoiding non-equivalence-robust executions prevents conflicting observations from occurring in a distributed setting. Our results prove that scheduling pairs of states and transitions in a strongly fair manner suffices to prevent conspiratorial executions; we then establish a formal connection between conspiracies and equivalence robustness; finally, we present a transformation scheme to implement our results and show how to build them into a well-known distributed scheduler. Previous results were applicable to only a subset of systems, merely attempted to characterise potential conspiracies, or were tightly bound to a particular interaction model. Comisión Interministerial de Ciencia y Tecnología TIC2003-02737-C0
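
    The key technical claim above is that strong fairness over (state, transition) pairs rules out conspiracies. As a rough sketch of what such a scheduler might look like (this is not the paper's transformation scheme; enabled and fire are hypothetical stand-ins for the system model, and the aging policy only approximates pair-wise strong fairness), one can always fire the longest-waiting enabled pair:

        from collections import defaultdict

        def strongly_fair_run(initial_state, enabled, fire, steps=1000):
            """enabled(state) -> iterable of transitions; fire(state, t) -> next state."""
            age = defaultdict(int)        # how long each (state, transition) pair has waited
            state = initial_state
            trace = []
            for _ in range(steps):
                pairs = [(state, t) for t in enabled(state)]
                if not pairs:
                    break                 # nothing enabled: the run terminates
                for p in pairs:           # every enabled pair ages while it waits
                    age[p] += 1
                chosen = max(pairs, key=lambda p: age[p])   # fire the longest-waiting pair
                del age[chosen]
                trace.append(chosen)
                state = fire(*chosen)
            return trace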

    A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments

    Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited-availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
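
    The model's central quantity is an exceedance probability: how often the state inconsistency between distributed nodes is large enough to make an interaction outcome implausible. Below is a minimal sketch of how such an estimate could be computed empirically; the error distribution, threshold, and units are invented for illustration and are not the thesis's model:

        import random

        def exceedance_probability(errors, threshold):
            """Fraction of observed consistency errors that exceed the plausibility threshold."""
            return sum(e > threshold for e in errors) / len(errors)

        # Synthetic example: latency-induced positional divergence modelled as an
        # exponential with a 0.4 m mean (an assumed distribution, not measured data).
        random.seed(0)
        samples = [random.expovariate(1 / 0.4) for _ in range(10_000)]
        print(exceedance_probability(samples, threshold=1.0))   # analytically exp(-2.5), about 0.08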

    Digital Economy and the Response of the Law


    Timely processing of big data in collaborative large-scale distributed systems

    Today’s Big Data phenomenon, characterized by huge volumes of data produced at very high rates by heterogeneous and geographically dispersed sources, is fostering the adoption of large-scale distributed systems that leverage parallelism, fault tolerance, and locality awareness to deliver suitable performance. Among the several areas where Big Data is gaining significance, the protection of Critical Infrastructure is one of the most strategic, since it affects the stability and safety of entire countries. Intrusion detection mechanisms can benefit greatly from novel Big Data technologies, which make it possible to exploit much more information and thus sharpen the accuracy of threat discovery. A key way to further increase the amount of data available for detection is collaboration (meant as information sharing) among distinct actors that share the common goal of recognizing malicious activities as early as possible: if they can agree to share their data, they can all substantially improve their cyber defenses. The abstraction of the Semantic Room (SR) allows interested parties to form trusted and contractually regulated federations for the sake of secure information sharing and processing. Another crucial point for the effectiveness of cyber protection mechanisms is the timeliness of detection, because the sooner a threat is identified, the faster proper countermeasures can be put in place to confine any damage. Within this context, the contributions reported in this thesis are threefold:
    * As a case study of how collaboration can enhance the efficacy of security tools, we developed a novel algorithm for the detection of stealthy port scans, named R-SYN (Ranked SYN port scan detection); a minimal illustration of this kind of SYN-based ranking appears after this list. We implemented it in three distinct technologies, all integrated within an SR-compliant architecture that allows for collaboration through information sharing: (i) a centralized Complex Event Processing (CEP) engine (Esper), (ii) a framework for distributed event processing (Storm), and (iii) Agilis, a novel platform for batch-oriented processing that leverages the Hadoop framework and RAM-based storage for fast data access. Regardless of the technology employed, all the evaluations showed that increasing the number of participants (that is, the amount of available input data) improves detection accuracy. The experiments also made clear that a distributed approach achieves lower detection latency and keeps up with higher input throughput than a centralized one.
    * Distributing the computation over a set of physical nodes raises the problem of assigning the available resources to the processing tasks so as to minimize completion time. We investigated this aspect in Storm by developing two distinct scheduling algorithms, both aimed at decreasing the average processing time of a single input event by reducing inter-node traffic. Experimental evaluations showed that these two algorithms can improve performance by up to 30%.
    * Computations in online processing platforms (such as Esper and Storm) run continuously, and the need to refine running computations or add new ones, together with the need to cope with input variability, requires adapting the resource allocation at runtime, which entails a set of additional problems. The most relevant of these concern how to handle incoming data and processing state while the topology is being reconfigured, and how to deal with temporarily reduced performance. To this end, we also explored the alternative approach of running the computation periodically on batches of input data: although it incurs a latency penalty, it eliminates the considerable complexity of dynamic reconfiguration. We chose Hadoop as the batch-oriented processing framework and developed strategies specific to computations based on time windows, which are very likely to be used for pattern recognition, as in intrusion detection. Our evaluations compared these strategies and made evident the kind of performance this approach can provide.
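
    As referenced in the first contribution above, the following is a minimal sketch of the general idea behind ranking sources by half-open SYN activity inside a time window. It is not the R-SYN algorithm itself (whose scoring is defined in the thesis), and the event field names are assumptions:

        from collections import defaultdict

        def rank_syn_scanners(events, window_start, window_end, min_ports=10):
            """events: iterable of dicts with keys 'time', 'src', 'dst_port', 'completed'."""
            half_open_ports = defaultdict(set)
            for e in events:
                if window_start <= e["time"] < window_end and not e["completed"]:
                    half_open_ports[e["src"]].add(e["dst_port"])   # SYN without a completed handshake
            scores = {src: len(ports) for src, ports in half_open_ports.items()
                      if len(ports) >= min_ports}
            # Higher score = more distinct ports probed without completing the handshake.
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)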

    Connecting the Speed-Accuracy Trade-Offs in Sensorimotor Control and Neurophysiology Reveals Diversity Sweet Spots in Layered Control Architectures

    Nervous systems sense, communicate, compute, and actuate movement using distributed components with trade-offs in speed, accuracy, sparsity, noise, and saturation. Nevertheless, the resulting control can achieve remarkably fast, accurate, and robust performance thanks to a highly effective layered control architecture. However, this architecture has received little attention in existing research, in part because of the lack of theory that connects speed-accuracy trade-offs (SATs) in component neurophysiology with system-level sensorimotor control and characterizes the overall system performance when different layers (planning vs. reflex) act jointly. In this thesis, we present a theoretical framework that provides a synthetic perspective across both levels and layers. We then use this framework to clarify the properties of effective layered architectures and to explain why there exists extreme diversity across layers (planning vs. reflex) and within levels (sensorimotor versus neural/muscle hardware). The framework characterizes how sensorimotor SATs are constrained by the component SATs of neurons communicating with spikes and of their sensory and muscle endpoints, in both stochastic and deterministic models. The theoretical predictions are also verified using driving experiments. Our results lead to a novel concept, termed "diversity sweet spots (DSSs)": appropriate diversity in the properties of neurons and muscles across layers and within levels helps create systems that are both fast and accurate despite being built from components that are individually slow or inaccurate. At the component level, this concept explains why there are extreme heterogeneities in neural and muscle composition. At the system level, DSSs explain the benefits of layering, which allows extreme heterogeneities in speed and accuracy across different sensorimotor loops. Similar issues and properties extend down to the cellular level in biology and outward to our most advanced network technologies, from the smart grid to the Internet of Things; we present our initial steps in extending the framework in that direction, which remains a wide-open area for future research.
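
    A toy numerical illustration of the diversity-sweet-spot idea (purely an assumption of this sketch, not the thesis's model or data): a reflex-like layer that reacts in one step but only with coarse resolution, combined with a planning-like layer that corrects exactly but only after a long delay, rejects disturbances better than either layer alone. All delays, quantization steps, and disturbance statistics below are invented:

        import random

        def mean_abs_error(disturbance, use_fast=True, use_slow=True,
                           fast_delay=1, fast_step=1.0, slow_delay=8):
            """Tracking error when each disturbance is cancelled by a fast-but-coarse
            reflex layer (after fast_delay steps, quantized to fast_step) and/or a
            slow-but-exact planning layer (after slow_delay steps)."""
            n = len(disturbance)
            e = [0.0] * n
            for k, w in enumerate(disturbance):
                coarse = round(w / fast_step) * fast_step if use_fast else 0.0
                residual = w - coarse
                for t in range(k, n):
                    left = w                          # nothing corrected yet
                    if use_fast and t >= k + fast_delay:
                        left = residual               # reflex layer cancelled the coarse part
                    if use_slow and t >= k + slow_delay:
                        left = 0.0                    # planning layer cancelled the rest
                    e[t] += left
            return sum(abs(x) for x in e) / n

        random.seed(1)
        w = [random.gauss(0, 2) for _ in range(1000)]
        print("fast, coarse layer only:", mean_abs_error(w, use_slow=False))
        print("slow, exact layer only :", mean_abs_error(w, use_fast=False))
        print("both layers (layered)  :", mean_abs_error(w))   # lowest of the three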

    D11.2 Consolidated results on the performance limits of wireless communications

    Deliverable D11.2 of the European project NEWCOM#. The report presents the intermediate results of the NEWCOM# Joint Research Activities (JRAs) on the Performance Limits of Wireless Communications and highlights the fundamental issues that have been investigated by WP1.1. It illustrates the JRAs identified during the first year of the project, which are currently ongoing. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of preliminary results, and a roadmap for the joint research work in the next year. Appendices give technical details on the scientific activity of each JRA. Peer reviewed. Preprint.