
    The SCC and the SICSA multi-core challenge

    Two phases of the SICSA Multi-core Challenge have now been completed. The first challenge was to produce concordances of books for sequences of words up to length N; the second was to simulate the motion of N celestial bodies under gravity. We took on both challenges on the SCC, using C and the Linux shell. This paper is an account of the experiences gained. It also gives a shorter account of the performance of other systems on the same set of problems, as they provide benchmarks against which the SCC performance can be compared.
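
    The first task lends itself to a short illustration. The sketch below builds a phrase concordance for word sequences of length up to N; it is a minimal sequential Python version with invustrative names, not the parallel C/SCC implementation discussed in the paper.

```python
# A minimal, sequential sketch of the first challenge task: build a
# concordance mapping every word sequence of length 1..N to the positions
# where it occurs. Names are illustrative; this is not the paper's parallel
# C/SCC implementation.
from collections import defaultdict

def concordance(words, max_n):
    """Map each phrase of 1..max_n consecutive words to its start positions."""
    index = defaultdict(list)
    for start in range(len(words)):
        for n in range(1, max_n + 1):
            if start + n > len(words):
                break
            index[tuple(words[start:start + n])].append(start)
    return index

text = "the quick brown fox jumps over the quick brown dog".split()
for phrase, positions in sorted(concordance(text, 3).items()):
    print(" ".join(phrase), "->", positions)
```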

    A trustworthy mobile agent infrastructure for network management

    Despite several advantages of mobile-agent-based approaches to network management over traditional SNMP-based approaches, industry is reluctant to adopt the mobile agent paradigm as a replacement for the existing manager-agent model; the management community requires an evolutionary, rather than a revolutionary, use of mobile agents. Furthermore, security for distributed management is a major concern, and agent-based management systems inherit the security risks of mobile agents. We have developed a Java-based mobile agent infrastructure for network management that enables the safe integration of mobile agents with the SNMP protocol. The security of the system has been evaluated under agent-to-agent-platform and agent-to-agent attacks and has proved trustworthy in the performance of network management tasks.
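
    As a rough sketch of the contrast between the two paradigms named above, the code below shows a manager polling every node versus a mobile agent that carries its task to each node. Everything here is a hypothetical illustration: the classes, the snmp_get() stub and the node attributes are invented, and the Java infrastructure and security mechanisms described in the abstract are not reproduced.

```python
# Hypothetical contrast between manager-agent polling and a mobile agent.
# snmp_get() is a local stand-in, not a real SNMP library call.
class Node:
    def __init__(self, name, cpu_load):
        self.name, self.cpu_load = name, cpu_load

def snmp_get(node, oid):
    # Stand-in for a remote SNMP GET; here it just reads a local attribute.
    return getattr(node, oid)

def manager_poll(nodes, oid):
    # Manager-agent model: one remote request per node per variable.
    return {n.name: snmp_get(n, oid) for n in nodes}

class MobileAgent:
    # Agent model: the task migrates and runs locally on each node visited,
    # so only the summarised result travels back to the manager.
    def __init__(self, task):
        self.task, self.results = task, {}

    def itinerary(self, nodes):
        for n in nodes:
            self.results[n.name] = self.task(n)
        return self.results

nodes = [Node("r1", 0.4), Node("r2", 0.9)]
print(manager_poll(nodes, "cpu_load"))
print(MobileAgent(lambda n: n.cpu_load > 0.8).itinerary(nodes))
```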

    UDRF: Multi-resource Fairness for Complex Jobs with Placement Constraints

    In this paper, we study the problem of multi-resource fairness in systems running complex jobs that consist of multiple interconnected tasks. A job is considered finished when all of its tasks have been executed in the system. Tasks can have different resource requirements and, because of special demands on particular hardware or software, may have placement constraints limiting the type of machines they can run on. We develop User-Dependence Dominant Resource Fairness (UDRF), a generalized version of max-min fairness that combines graph theory and the notion of dominant resource shares to ensure multi-resource fairness between complex workflows. UDRF satisfies several desirable properties, including strategy-proofness, which ensures that users do not benefit from misreporting their true resource demands. We propose an offline algorithm that computes the optimal UDRF allocation, but optimality comes at a cost, especially for systems where schedulers need to make thousands of online scheduling decisions per second. We therefore develop a lightweight online algorithm that closely approximates UDRF, and we propose a simple mechanism to decentralize the UDRF scheduling process across multiple schedulers. Large-scale simulations driven by Google cluster-usage traces show that UDRF achieves better resource utilization and throughput than the current state of the art in fair resource allocation.
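
    As a rough illustration of the kind of allocation at stake, the sketch below runs classic dominant-resource-fair progressive filling with a simple placement-constraint check. It is an assumption-laden Python approximation, not the paper's UDRF algorithm: the user-dependence graph and decentralized scheduling are not modelled, and all capacities, demands and constraints are invented.

```python
# DRF-style progressive filling with placement constraints (illustrative only;
# not UDRF). Repeatedly give one more task to the user with the lowest
# dominant share who can still place a task on an allowed machine.
CAPACITY = {"m1": {"cpu": 8.0, "mem": 16.0}, "m2": {"cpu": 8.0, "mem": 16.0}}

USERS = {
    # per-task demand and the set of machines the task may run on
    "A": {"demand": {"cpu": 1.0, "mem": 4.0}, "machines": {"m1"}},
    "B": {"demand": {"cpu": 3.0, "mem": 1.0}, "machines": {"m1", "m2"}},
}

def dominant_share(user, n_tasks):
    total = {r: sum(CAPACITY[m][r] for m in CAPACITY) for r in ("cpu", "mem")}
    d = USERS[user]["demand"]
    return max(n_tasks * d[r] / total[r] for r in d)

def allocate():
    free = {m: dict(res) for m, res in CAPACITY.items()}
    tasks = {u: 0 for u in USERS}
    while True:
        # visit users in order of increasing dominant share
        for u in sorted(USERS, key=lambda u: dominant_share(u, tasks[u])):
            d = USERS[u]["demand"]
            machine = next((m for m in USERS[u]["machines"]
                            if all(free[m][r] >= d[r] for r in d)), None)
            if machine:
                for r in d:
                    free[machine][r] -= d[r]
                tasks[u] += 1
                break
        else:
            return tasks  # no user can place another task anywhere

print(allocate())
```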

    DEBS Grand Challenge: Glasgow Automata Illustrated

    The challenge is solved using Glasgow automata, concise complex event processing engines that execute in the context of a topic-based publish/subscribe cache of event streams and relations. The imperative programming style of the Glasgow Automaton Programming Language (GAPL) enables multiple, efficient realisations of the two challenge queries.
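
    As an informal illustration of the idea (not GAPL itself), the sketch below shows a minimal automaton subscribed to a topic-based event stream that emits a complex event when a simple pattern completes; the topics, threshold and window are invented.

```python
# A tiny state machine over a topic-based event stream: emit a complex event
# when `window` consecutive "load" readings exceed `threshold`. Illustrative
# only; not a Glasgow automaton or GAPL program.
def detect_overload(events, threshold=0.9, window=3):
    streak = 0                       # the automaton's only piece of state
    for topic, value in events:
        if topic != "load":          # topic-based filtering of the stream
            continue
        streak = streak + 1 if value > threshold else 0
        if streak == window:
            yield ("overload", value)
            streak = 0

stream = [("load", 0.95), ("temp", 40), ("load", 0.97), ("load", 0.99), ("load", 0.2)]
print(list(detect_overload(stream)))
```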

    An elementary proposition on the dynamic routing problem in wireless networks of sensors

    The routing problem (finding an optimal route from one point in a computer network to another) is surrounded by impossibility results. These results are usually expressed as lower and upper bounds on the set of nodes (or the set of links) of a network and represent the complexity of a solution to the routing problem (a routing function). The routing problem dealt with here, in particular, is a dynamic one (it accounts for network dynamics) and concerns wireless networks of sensors. Sensors form wireless links of limited capacity and time-variable quality to route messages amongst themselves. It is desired that sensors self-organize ad hoc in order to successfully carry out a routing task, e.g. provide daily soil erosion reports for a monitored watershed, or provide immediate indications of an imminent volcanic eruption, in spite of network dynamics.

    Link dynamics are the first barrier to finding an optimal route between a node x and a node y in a sensor network. The uncertainty of the outcome (the best next hop) of a routing function lies partly in the quality fluctuations of wireless links. Take, for example, a static network: it is known that, given the set of nodes and their link weights (or costs), a node can compute optimal routes by running, say, Dijkstra's algorithm. Link dynamics, however, mean that costs are not static. Hence, sensors need a metric (a measurable quantity of uncertainty) to monitor for fluctuations, either improvements or degradations of quality or load; when a fluctuation is sufficiently large (say, by Delta), sensors ought to update their costs and seek another route.

    Therein lies the other fundamental barrier to finding an optimal route: complexity. A crude argument would suggest that sensors (and their links) have an upper bound on the number of messages they can transmit, receive and store due to resource constraints. Such messages can be application traffic, in which case they are desirable, or control traffic, in which case they should be kept minimal. The first type of traffic is demand, and a user should provision for it accordingly. The second type of traffic is overhead, and it is necessary if a routing system (or scheme) is to ensure its fidelity to the application requirements (policy). It is possible for a routing scheme to approximate optimal routes (by Delta) by reducing its message and/or memory complexity. The common denominator of the routing problem and the desire to minimize overhead while approximating optimal routes is Delta, the deviation (or stretch) of a computed route from an optimal one, as computed by a node that has instantaneous knowledge of the set of all nodes and their interaction costs (an oracle).

    This dissertation deals with both problems in unison. To do so, it needs to translate the policy space (the user objectives) into a metric space (routing objectives). It does so by means of a cost function that normalizes metrics into a number of hops. It then proceeds to devise, design, and implement a scheme that computes minimum-hop-count routes with manageable complexity. The theory presented is founded on (well-ordered) sets with respect to an elementary proposition: that a route from a source x to a destination y can be computed either by y sending an advertisement to the set of all nodes, or by x sending a query to the set of all nodes; henceforth the proactive method (of y) and the reactive method (of x), respectively.
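
    The Delta-triggered recomputation described above can be sketched informally: routes are recomputed with Dijkstra's algorithm only when a link cost drifts by more than Delta from the value used for the last computation. The code is an illustrative Python sketch with invented names, not the dissertation's scheme.

```python
# Recompute routes only when a link-cost fluctuation exceeds delta.
import heapq

def dijkstra(graph, source):
    """graph: {u: {v: cost}}. Returns the cost of the best route to each node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def maybe_recompute(graph, source, cached, last_costs, link, new_cost, delta=0.5):
    """Apply a link-cost change; rerun Dijkstra only if it exceeds delta."""
    u, v = link
    graph[u][v] = new_cost
    if abs(new_cost - last_costs[link]) <= delta:
        return cached               # small fluctuation: keep the cached routes
    last_costs[link] = new_cost     # significant change: pay for recomputation
    return dijkstra(graph, source)

g = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.0}, "c": {}}
costs = {("a", "b"): 1.0}
routes = dijkstra(g, "a")
routes = maybe_recompute(g, "a", routes, costs, ("a", "b"), 1.2)  # within delta
routes = maybe_recompute(g, "a", routes, costs, ("a", "b"), 5.0)  # recomputed
print(routes)
```
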
    The debate between proactive and reactive routing protocols appears in many instances of the routing problem (e.g. routing in mobile networks, routing in delay-tolerant networks, compact routing), and it is focussed on whether nodes should know a priori all routes and then select the best one (the proactive method), or whether each node should simply search for a (hopefully best) route on demand (the reactive method). The proactive method is stateful, as it requires the entire metric space - the set of nodes and their interaction costs - in memory (in a routing table). The routes computed by the proactive method are optimal, and the lower and upper bounds of proactive schemes match those of an oracle. Any attempt to reduce the proactive overhead, e.g. by introducing hierarchies, results in sub-optimal routes (of known stretch). The reactive method is stateless, as it requires no information whatsoever to compute a route. Reactive schemes - at least as they are presently understood - compute sub-optimal routes (and, thus far, of unknown stretch).

    This dissertation attempts to answer the following question: "What is the least amount of state required to compute an optimal route from a source to a destination?" A hybrid routing scheme is used to investigate this question, one that uses the proactive method to compute routes to near destinations and the reactive method for distant destinations. It is shown that there are cases where hybrid schemes can converge to optimal routes, despite possessing incomplete routing state, and that the necessary and sufficient condition for computing optimal routes with local state alone is related neither to the size nor to the density of a network; it is rather the circumference (the size of the largest cycle) of a network that matters. Counterexamples, where local state is insufficient, are discussed to derive the worst-case stretch. The theory is augmented with simulation results and a small experimental testbed to motivate the discussion on how the policy space (user requirements) can translate into metric spaces and how different metrics affect performance. On the debate between proactive and reactive protocols, it is shown that the two classes are equivalent. The dissertation concludes with a discussion on the applicability of its results and poses some open problems.
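
    As an informal illustration of the hybrid idea (and not the dissertation's actual scheme), the sketch below keeps a proactive next-hop table only for destinations within a small hop radius and falls back to a reactive search beyond it; the graph, radius and function names are invented.

```python
# Hybrid routing sketch: proactive state for near destinations, reactive
# lookup for distant ones. Illustrative only.
from collections import deque

def bfs_within(graph, source, radius):
    """Proactive state: next hop for every destination at most `radius` hops away."""
    table, frontier, seen = {}, deque([(source, None, 0)]), {source}
    while frontier:
        node, first_hop, hops = frontier.popleft()
        if node != source:
            table[node] = first_hop
        if hops == radius:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, nbr if node == source else first_hop, hops + 1))
    return table

def route(graph, source, dest, table):
    if dest in table:                 # proactive hit: answered from local state
        return table[dest]
    # reactive miss: search on demand (a stand-in for flooding a query)
    return bfs_within(graph, source, len(graph)).get(dest)

g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
t = bfs_within(g, "a", 1)
print(route(g, "a", "b", t), route(g, "a", "d", t))
```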

    A conceptual information sharing framework to improve supply chain security collaboration

    Modern supply chains are critical in terms of efficiency, economic activity and commercial impact, particularly in the case of security incidents. Inland terminals, commercial ports and dry ports constitute key gateways for the transportation flows in these modern supply chains and require enhanced security procedures. This paper develops a framework that facilitates the sharing of information among various supply chain stakeholders, which is expected to improve the security level from a value-chain perspective. In this context, we propose upgrading current security strategies by utilizing existing processes and equipment, in order to minimise the time and cost currently needed while, more importantly, improving the level of security in the supply chain. A conceptual rule- and role-based data fusion framework is developed, enabling the seamless and timely exchange of messages. The proposed Data Fusion Framework has a simple architecture that supports quick integration with either network-based, distributed systems or conventional stand-alone systems, and adheres to common data fusion principles. The framework considers different components (e.g. sensors, algorithms and fusing procedures) in an equipment-agnostic approach so as to enable easy access to and easy usage of security information.
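
    As an informal illustration of rule- and role-based message exchange (not the framework itself), the sketch below forwards a published message only to the stakeholder roles that a rule table authorises; all roles, message types and rules are invented.

```python
# Hypothetical rule- and role-based message routing between supply chain
# stakeholders. Rules map (sender role, message type) to authorised receivers.
RULES = {
    ("terminal_operator", "container_scan"): {"customs", "port_authority"},
    ("carrier", "route_deviation"): {"port_authority"},
}

SUBSCRIBERS = {"customs": [], "port_authority": []}

def publish(sender_role, msg_type, payload):
    """Forward a message only to the roles the rule table authorises."""
    delivered = []
    for role in RULES.get((sender_role, msg_type), set()):
        SUBSCRIBERS[role].append((msg_type, payload))
        delivered.append(role)
    return delivered

print(publish("terminal_operator", "container_scan",
              {"container": "MSCU1234567", "alert": "radiation"}))
```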