
    Implementing a flexible failure detector that expresses the confidence in the system

    Traditional unreliable failure detectors are per-process oracles that provide a list of nodes suspected of having failed. Previously, we introduced the Impact failure detector (Impact FD), which outputs a trust level value: the degree of confidence in the system. An impact factor is assigned to each node, and the trust level is equal to the sum of the impact factors of the nodes not suspected of having failed. An input threshold parameter defines an impact factor limit value over which the degree of confidence in the system is ensured. The impact factor indicates the relative importance of the process in the set S, while the threshold offers a degree of flexibility for failures and false suspicions. In this article we propose two different algorithms, based on query-response message rounds, that implement the Impact FD and whose designs were tailored to satisfy the Impact FD's flexibility. The first exploits the time-free message pattern approach, while the second considers a set of bounded timely responses. We also introduce the concept that a process can be PS-accessible (or ♦PS-accessible), which guarantees that the system S will always (or eventually always) be trusted by this process, as well as two properties, PR(IT) and PR(♦IT), that characterize the minimum stability condition of S that ensures confidence (or eventual confidence) in it. In both implementations, if the process that monitors S is PS-accessible or ♦PS-accessible, at every query round it only waits (or eventually only waits) for a set of responses that satisfies the threshold. A crucial facet of this set of processes is that it is not fixed: the set can change at each round, in accordance with the flexibility of the Impact FD.
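
    To make the trust-level mechanism concrete, here is a minimal Python sketch of the computation described above: it sums the impact factors of the unsuspected nodes and compares the result with the threshold. The names (impact_factor, suspected, threshold) are illustrative and not taken from the paper.

        def trust_level(impact_factor, suspected):
            """Sum of the impact factors of the nodes not suspected of having failed."""
            return sum(weight for node, weight in impact_factor.items()
                       if node not in suspected)

        def system_trusted(impact_factor, suspected, threshold):
            """The system S is trusted when the trust level reaches the threshold."""
            return trust_level(impact_factor, suspected) >= threshold

        # Example: three nodes with different relative importance in S.
        impact = {"p1": 3.0, "p2": 1.0, "p3": 1.0}
        print(system_trusted(impact, suspected={"p3"}, threshold=3.5))  # True: 3.0 + 1.0 >= 3.5
        print(system_trusted(impact, suspected={"p1"}, threshold=3.5))  # False: 1.0 + 1.0 < 3.5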

    Aten: A Dispatcher for Big Data Applications in Heterogeneous Systems

    Stream Processing Engines (SPEs) have to support high data ingestion to ensure quality and efficiency for the end user or a system administrator. The data flow processed by an SPE fluctuates over time and requires real-time or near-real-time adjustment of the resource pool (network, memory, CPU, and others). This scenario leads to the problem known as skewed data production, caused by a non-uniform incoming flow at specific points in the environment, which slows applications down through network bottlenecks and inefficient load balancing. This work proposes Aten as a solution to overcome unbalanced data flows processed by Big Data stream applications in heterogeneous systems. Aten manages data aggregation and data streams within message queues, adopting different algorithms as strategies to partition the data flow over all the available computational resources. The paper presents preliminary results indicating that it is possible to maximize throughput while also providing low latency levels for SPEs.
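
    As an illustration of the kind of partitioning strategy such a dispatcher can plug in, the sketch below assigns records to heterogeneous workers in proportion to their declared capacities. This is a generic weighted round-robin written under stated assumptions, not Aten's actual algorithm; the worker names and capacities are invented.

        import itertools

        def weighted_dispatch(records, workers):
            """Assign each record to a worker in proportion to the worker's capacity."""
            # Expand each worker id according to its capacity, then cycle over the expansion.
            slots = [w for w, capacity in workers.items() for _ in range(capacity)]
            assignment = {}
            for record, worker in zip(records, itertools.cycle(slots)):
                assignment.setdefault(worker, []).append(record)
            return assignment

        workers = {"node-a": 4, "node-b": 2, "node-c": 1}   # relative capacities (illustrative)
        records = ["msg-%d" % i for i in range(14)]
        for worker, batch in weighted_dispatch(records, workers).items():
            print(worker, len(batch))   # node-a: 8, node-b: 4, node-c: 2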

    Entropy to Mitigate Non-IID Data Problem on Federated Learning for the Edge Intelligence Environment

    Machine Learning (ML) algorithms process input data, making it possible to recognize and extract patterns from large data volumes. Likewise, Internet of Things (IoT) devices provide knowledge in a Federated Learning (FL) environment, sharing parameters without compromising their raw data. However, FL suffers from non-independent and identically distributed (non-iid) data, that is, heterogeneous and biased input data, such as smartphone data sources. This bias causes low convergence for ML algorithms and high energy and bandwidth consumption. This work proposes a method that mitigates non-iid data through FedAvg-BE, an algorithm that provides Federated Learning with border entropy evaluation to select quality data from each device in a cross-device, non-iid data environment. Extensive experiments were performed using publicly available datasets to train deep neural networks. The evaluation of the experimental results shows that execution time is reduced by up to 22% for the MNIST dataset and 26% for the CIFAR-10 dataset with the proposed model in Federated Learning settings. The results demonstrate the feasibility of the proposed model for mitigating the impact of non-iid data.
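
    The sketch below illustrates the general idea of entropy-based selection in a federated setting: clients whose local label distributions are least skewed are preferred for a training round. The entropy measure and the selection rule here are assumptions made for illustration only, not the paper's exact border-entropy criterion, and the client data is invented.

        import math
        from collections import Counter

        def label_entropy(labels):
            """Shannon entropy of a client's label distribution (higher = less skewed)."""
            counts = Counter(labels)
            total = len(labels)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def select_clients(client_labels, k):
            """Pick the k clients with the least skewed (highest-entropy) local data."""
            ranked = sorted(client_labels,
                            key=lambda cid: label_entropy(client_labels[cid]),
                            reverse=True)
            return ranked[:k]

        clients = {                              # invented local label sets
            "phone-1": [0, 0, 0, 0, 1],          # heavily skewed
            "phone-2": [0, 1, 2, 3, 4, 5],       # balanced
            "phone-3": [1, 1, 2, 2, 3, 3],       # moderately balanced
        }
        print(select_clients(clients, k=2))      # ['phone-2', 'phone-3']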

    Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems

    A very important component of a parallel system that generates irregular computational patterns is its work distribution strategy. Scheduling strategies for these kinds of systems must be smart enough to dynamically balance workload without incurring very high overhead. Logic programs running on parallel logic programming systems are examples of irregular parallel computations. The two main forms of parallelism exploited by parallel logic programming systems are and-parallelism, which arises when several literals in the body of a clause can execute in parallel, and or-parallelism, which arises when several alternative clauses in the program database can be selected in parallel. In this work we show that scheduling strategies for distributing and-work and or-work in parallel logic programming systems should combine information obtained at compile time with runtime information whenever possible, in order to obtain better performance. The use of compile-time information has two advantages over currently implemented systems that rely only on runtime information: (1) the user does not need to adjust parameters in order to estimate the sizes of and-work and or-work for the programs; (2) the schedulers can use more accurate estimates of the sizes of and-work and or-work to make better decisions at runtime. We did our experiments with Andorra-I, a parallel logic programming system that exploits both determinate and-parallelism and or-parallelism. To obtain compile-time granularity information we used the ORCA tool. Our benchmark set ranges over programs containing and-parallelism only, or-parallelism only, and a combination of both and- and or-parallelism. Our results show that, when well designed, scheduling strategies can actually benefit from compile-time granularity information.
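
    The following sketch shows, under stated assumptions, how a scheduler might consult compile-time granularity estimates before exporting work to an idle worker. The estimates, threshold, and goal names are invented for illustration; this is not Andorra-I's scheduler or ORCA's output format.

        # Per-goal cost estimates, as a compile-time granularity analysis might emit (invented).
        GRANULARITY = {"solve_big/2": 50000, "tiny_check/1": 12}

        EXPORT_THRESHOLD = 1000  # below this estimated cost, exporting is not worth the overhead

        def should_export(goal, idle_workers):
            """Export a goal to another worker only if it is coarse-grained enough."""
            estimated_cost = GRANULARITY.get(goal, EXPORT_THRESHOLD)  # unknown goals: borderline
            return idle_workers > 0 and estimated_cost >= EXPORT_THRESHOLD

        print(should_export("solve_big/2", idle_workers=3))   # True: coarse-grained work
        print(should_export("tiny_check/1", idle_workers=3))  # False: run locally instead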

    DNA damage: Mechanisms and principles of homology search during recombination

    Homologous recombination is crucial for genome stability and for genetic exchange. Although our knowledge of the principal steps in recombination and its machinery is well advanced, homology search, the critical step of exploring the genome for homologous sequences to enable recombination, has remained mostly enigmatic. However, recent methodological advances have provided considerable new insights into this fundamental step in recombination that can be integrated into a mechanistic model. These advances emphasize the importance of genomic proximity and nuclear organization for homology search and the critical role of homology search mediators in this process. They also aid our understanding of how homology search might lead to unwanted and potentially disease-promoting recombination events.