
    Understanding collaboration in volunteer computing systems

    Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this computing paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately, it is not clear to software designers how a device can contribute local resources to the shared environment without compromising resources that the contributor may later need itself. Therefore, many designers adopt a conservative position when defining the collaboration strategy embedded in volunteer computing applications. This position underutilizes the devices' local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the distributed shared environment. The study considers five collaboration strategies, analyzed in computing environments with both abundance and scarcity of resources. The results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives, and that effort-based incentives do not jeopardize the availability of local resources for local needs.
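    As a rough illustration of the distinction the study draws, the toy Python sketch below contrasts the two incentive families. The scoring formulas, device figures, and names are illustrative assumptions, not the paper's definitions.

```python
# A minimal, hypothetical sketch of the two incentive families the study contrasts.
# The scoring formulas below are illustrative assumptions, not the paper's definitions.

def contribution_score(shared_units: float) -> float:
    # Contribution-based: reward the absolute amount of resource delivered,
    # which systematically favours powerful devices.
    return shared_units

def effort_score(shared_units: float, capacity: float) -> float:
    # Effort-based: reward the fraction of its own capacity a device offers,
    # so a weak device sharing half of little ranks like a strong one sharing
    # half of much, and no device is pushed past its local needs.
    return shared_units / capacity

# Example: a phone sharing 1 of its 2 units vs. a workstation sharing 2 of 16.
print(contribution_score(1.0), contribution_score(2.0))  # 1.0 2.0   -> workstation ranks higher
print(effort_score(1.0, 2.0), effort_score(2.0, 16.0))   # 0.5 0.125 -> phone ranks higher
```

    Under the contribution rule the workstation always outranks the phone; under the effort rule a constrained device sharing a large fraction of a small capacity is rewarded, which is consistent with the reported finding that effort-based incentives do not starve local needs.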

    A Survey on the Network Models applied in the Industrial Network Optimization

    Network architecture design is central to the optimization of industrial networks. Architectures can be divided into small-scale and large-scale networks according to their size, and graph theory is an efficient mathematical tool for modeling their topology. Small-scale networks often have regular topologies, whereas research on large-scale networks mainly focuses on the random characteristics of nodes and edges; popular models include random networks, small-world networks, and scale-free networks. Starting from network scale, this survey summarizes and analyzes graph-theoretic network modeling methods and their practical application in industrial scenarios. Furthermore, the survey proposes a novel network performance metric, system entropy, and analyzes its mathematical properties: non-negativity, monotonicity, and convexity/concavity. The advantage of system entropy is that it covers the existing regular, random, small-world, and scale-free network models, and thus has strong generality. Simulation results show that the metric enables the comparison of various industrial networks under different models.
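    For readers who want to see the three classical model families the survey builds on, the sketch below generates them with networkx and compares them using the Shannon entropy of the degree distribution. Note the hedge: the survey's system entropy is its own construction, and the degree entropy here is only an assumed stand-in showing how an entropy-style metric can compare models.

```python
# Sketch: generate the three classical random-graph families with networkx and
# compare them via the Shannon entropy of the degree distribution. This degree
# entropy is an assumed stand-in, not the survey's "system entropy" metric.
import math
from collections import Counter

import networkx as nx

def degree_entropy(g: nx.Graph) -> float:
    # Shannon entropy of P(k), the empirical degree distribution.
    degrees = [d for _, d in g.degree()]
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

n_nodes = 200
models = {
    "random (Erdos-Renyi)":         nx.gnp_random_graph(n_nodes, 0.04, seed=1),
    "small-world (Watts-Strogatz)": nx.watts_strogatz_graph(n_nodes, 4, 0.1, seed=1),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(n_nodes, 2, seed=1),
}
for name, g in models.items():
    print(f"{name:30s} degree entropy = {degree_entropy(g):.3f}")
```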

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge, and modelling against a rapidly changing knowledge base. Our framework comprises a process that clearly separates scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models that rigorously records assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools that enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology for linking scales without losing important detail through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation, and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions, and response to therapeutic interventions.
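    The process-per-cell style of concurrency the abstract alludes to can be sketched with ordinary language primitives. The fragment below is a minimal, hypothetical illustration in Python's asyncio, not the authors' framework: each cell runs as its own task and exchanges signal levels with neighbours over queues, so intracellular behaviour is retained inside a larger structure rather than averaged away. The signalling rule and all names are invented for illustration.

```python
# Hypothetical sketch of a process-per-cell concurrency style: every cell is an
# independent task exchanging signals with neighbours over queues. The toy
# signalling rule (take the max of incoming levels) is an invented placeholder.
import asyncio

async def cell(name: str, inbox: asyncio.Queue, neighbours: list) -> None:
    level = 0.0
    for _ in range(3):                    # three coarse signalling rounds
        for q in neighbours:              # push the current level to neighbours
            await q.put(level + 1.0)
        await asyncio.sleep(0)            # yield so peer cells can run
        while not inbox.empty():          # integrate whatever has arrived
            level = max(level, await inbox.get())
    print(f"{name}: final signal level {level}")

async def main() -> None:
    boxes = [asyncio.Queue() for _ in range(3)]
    await asyncio.gather(
        cell("cell-0", boxes[0], [boxes[1]]),
        cell("cell-1", boxes[1], [boxes[0], boxes[2]]),
        cell("cell-2", boxes[2], [boxes[1]]),
    )

asyncio.run(main())
```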

    HoPP: Robust and Resilient Publish-Subscribe for an Information-Centric Internet of Things

    This paper revisits NDN deployment in the IoT with a special focus on the interaction of sensors and actuators. Such scenarios require high responsiveness and limited control state at the constrained nodes. We argue that the NDN request-response pattern, which prevents data push, is vital for IoT networks. We contribute HoP-and-Pull (HoPP), a robust publish-subscribe scheme for typical IoT scenarios, targeting IoT networks consisting of hundreds of resource-constrained devices with intermittent connectivity. Our approach keeps FIB tables to a minimum and naturally supports mobility, temporary network partitioning, data aggregation, and near-real-time reactivity. We experimentally evaluate the protocol in a real-world deployment on the IoT-Lab testbed with varying numbers of constrained devices, each wirelessly interconnected via IEEE 802.15.4 LoWPANs. Implementations are built on CCN-lite with RIOT and support experiments in various single- and multi-hop scenarios.
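    The request-response discipline HoPP builds on can be made concrete with a toy example. The sketch below is a hypothetical, single-hop illustration of pull-based retrieval with a minimal FIB and longest-prefix matching; it is not the HoPP protocol itself, whose signalling, topology maintenance, and retransmission logic go well beyond a few lines. All names and tables are invented.

```python
# Hypothetical toy of the pull-based pattern HoPP builds on: consumers express a
# named Interest, and data only flows back along a matching FIB entry; nothing
# is pushed unsolicited. The FIB, content store, and names are invented.
FIB = {"/temp": "sensor-7"}                        # minimal forwarding state
CONTENT_STORE = {"sensor-7": {"/temp/kitchen/42": b"21.5C"}}

def longest_prefix_match(name: str):
    # Walk the name components from most to least specific.
    parts = name.split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/".join(parts[:i]) or "/"
        if prefix in FIB:
            return FIB[prefix]
    return None

def express_interest(name: str):
    node = longest_prefix_match(name)              # forward via FIB, never push
    if node is None:
        return None                                # no route: Interest times out
    return CONTENT_STORE.get(node, {}).get(name)

print(express_interest("/temp/kitchen/42"))        # b'21.5C'
print(express_interest("/humidity/attic/1"))       # None (no FIB entry)
```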