
    Probabilistic grid scheduling based on job statistics and monitoring information

    This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, the current state of the Grid, and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling. Initial work established the motivation and requirements for a more efficient Grid scheduler, able to adapt to the dynamic nature of Grid resources and the submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications. A simulation tool, GridLoader, has been developed to model application loads similar to those of a number of typical Grid applications. GridLoader can simulate CPU utilisation, memory allocation and network transfers according to limits set through command-line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment suitable for testing the scheduler's adaptability and its prediction algorithm. To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite collects resource usage information for individual Grid applications, integrates it into a standard XML-based information flow, provides visualisation through a Web portal, and exports data into a format suitable for off-line analysis. The thesis also presents an initial investigation of the utilisation of the University College London Central Computing Cluster facility running Sun Grid Engine middleware. The feasibility of basic prediction concepts based on historical information and process meta-data has been established, and possible scheduling improvements using such predictions have been identified. The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 surveys the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work, the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis, while Section 8.4 concludes the thesis, followed by appendices and references.
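
    The abstract describes GridLoader only at a high level. As a rough illustration of what "achieving set resource utilisation targets in a probabilistic manner" can look like, the following sketch (hypothetical, not GridLoader's actual code; all parameter names invented) randomises the busy/idle split of each short duty cycle around a requested mean CPU utilisation:

    ```python
    # Hypothetical sketch (not GridLoader itself): drive CPU utilisation towards a
    # target by randomising the busy/idle split of each 100 ms duty cycle around
    # the requested mean, so the load is noisy but converges to the target on average.
    import argparse
    import random
    import time

    def busy_wait(seconds):
        """Spin the CPU for roughly `seconds` seconds."""
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    def run(target_util, duration, period=0.1, jitter=0.2):
        """Alternate busy and idle phases; each cycle's utilisation is drawn
        uniformly from target_util * (1 +/- jitter)."""
        deadline = time.time() + duration
        while time.time() < deadline:
            util = random.uniform(target_util * (1 - jitter),
                                  target_util * (1 + jitter))
            util = min(max(util, 0.0), 1.0)
            busy_wait(period * util)
            time.sleep(period * (1.0 - util))

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Probabilistic CPU load generator")
        parser.add_argument("--target-util", type=float, default=0.5)
        parser.add_argument("--duration", type=float, default=10.0)
        args = parser.parse_args()
        run(args.target_util, args.duration)
    ```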

    Hardware Architectures for Low-power In-Situ Monitoring of Wireless Embedded Systems

    As wireless embedded systems transition from lab-scale research prototypes to large-scale commercial deployments, providing reliable and dependable system operation becomes absolutely crucial to ensure successful adoption. However, the untethered nature of wireless embedded systems severely limits the ability to access, debug, and control device operation after deployment, a capability referred to as post-deployment or in-situ visibility. It is intuitive that the more information we have about a system’s operation after deployment, the better and faster we can respond when anomalous behavior is detected. Therefore, post-deployment visibility is a foundation upon which other runtime reliability techniques can be built. However, visibility into system operation diminishes significantly once the devices are remotely deployed, and we refer to this problem as a lack of post-deployment visibility.

    Adaptive structured parallelism

    Algorithmic skeletons abstract commonly-used patterns of parallel computation, communication, and interaction. Parallel programs are expressed by interweaving parameterised skeletons, analogously to the way in which structured sequential programs are developed using well-defined constructs. Skeletons provide top-down design composition and control inheritance throughout the program structure. Based on the algorithmic skeleton concept, structured parallelism provides a high-level parallel programming technique which allows the conceptual description of parallel programs whilst fostering platform independence and algorithm abstraction. By decoupling the algorithm specification from machine-dependent structural considerations, structured parallelism allows programmers to code programs regardless of how the computation and communications will be executed in the system platform. Meanwhile, large non-dedicated multiprocessing systems have long posed a challenge to known distributed systems programming techniques as a result of the inherent heterogeneity and dynamism of their resources. Scant research has been devoted to using the structural information provided by skeletons to adaptively improve program performance based on resource utilisation. This thesis presents a methodology to improve skeletal parallel programming in heterogeneous distributed systems by introducing adaptivity through resource awareness. As we hypothesise that a skeletal program should be able to adapt to dynamic resource conditions over time using its structural forecasting information, we have developed ASPara: Adaptive Structured Parallelism. ASPara is a generic methodology for incorporating structural information into a parallel program at compilation time, which helps it to adapt at execution time.
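
    As a concrete illustration of the skeleton idea, the sketch below shows a minimal task-farm skeleton in Python. This is a generic example, not ASPara itself, and the resource-awareness hook is a placeholder standing in for the structural forecasting information the thesis describes:

    ```python
    # Illustrative sketch only, not the ASPara implementation: a parameterised
    # task-farm skeleton whose degree of parallelism is chosen from a (here
    # trivially faked) resource-awareness hook.
    from concurrent.futures import ProcessPoolExecutor
    import os

    def available_workers():
        # Placeholder for resource awareness: a real system would query
        # monitoring data; here we just use the local CPU count.
        return os.cpu_count() or 1

    def farm(worker, tasks, n_workers=None):
        """Task-farm skeleton: apply `worker` to every task in parallel.
        The caller supplies only the sequential worker; the skeleton owns
        the parallel structure."""
        n_workers = n_workers or available_workers()
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(worker, tasks))

    def square(x):          # the application-specific "muscle" function
        return x * x

    if __name__ == "__main__":
        print(farm(square, range(10)))
    ```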

    On Random Sampling for Compliance Monitoring in Opportunistic Spectrum Access Networks

    In the expanding spectrum marketplace, there has been a long-term evolution towards more market-oriented mechanisms, such as Opportunistic Spectrum Access (OSA), enabled through Cognitive Radio (CR) technology. However, the potential of CR technologies to revolutionize wireless communications also introduces challenges, owing to the potentially non-deterministic CR behaviour in the Electrospace. While establishing and enforcing compliance with spectrum etiquette rules is essential to the realization of successful OSA networks in the future, research activity into enforcement has only recently increased. This dissertation presents novel work on the spectrum monitoring aspect, which is crucial to effective enforcement of OSA. An overview of the challenges faced by current compliance monitoring methods is first presented. A framework is then proposed for the use of random spectral sampling techniques to reduce data collection complexity in wideband sensing scenarios. This approach is recommended as an alternative to Compressed Sensing (CS) techniques for wideband spectral occupancy estimation, which may be difficult to utilize in many practical congested scenarios where compliance monitoring is required. Next, a low-cost computational approach to online randomized temporal sensing deployment is presented for characterization of temporal spectrum occupancy in cognitive radio scenarios. The random sensing approach is demonstrated and its performance is compared to a CS-based approach for occupancy estimation. A novel frame-based sampling inversion technique is then presented for cases when it is necessary to track the temporal behaviour of individual CRs or CR networks. Parameters from randomly sampled Physical Layer Convergence Protocol (PLCP) data frames are used to reconstruct occupancy statistics, taking account of missed frames due to sampling design, sensor limitations and frame errors. Finally, investigations into the use of distributed and mobile spectrum sensing to collect spatial diversity and improve the above techniques are presented, for several common monitoring tasks in spectrum enforcement. Specifically, the focus is on techniques for achieving consensus in dynamic topologies, such as in mobile sensing scenarios.
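
    To make the core idea of randomised temporal sensing concrete, here is a toy sketch (not taken from the dissertation; the occupancy model and sample sizes are invented) that estimates a channel's duty cycle from a small random subset of time slots:

    ```python
    # Hypothetical illustration: estimate the temporal duty cycle of a channel
    # from a small set of randomly chosen sensing instants, the basic idea behind
    # randomised temporal sensing for compliance monitoring.
    import random

    def simulate_occupancy(n_slots, on_prob=0.3, seed=1):
        """Ground-truth busy/idle sequence for one channel."""
        rng = random.Random(seed)
        return [1 if rng.random() < on_prob else 0 for _ in range(n_slots)]

    def random_sampling_estimate(occupancy, n_samples, seed=2):
        """Sense only n_samples randomly selected slots and average them."""
        rng = random.Random(seed)
        sampled = rng.sample(range(len(occupancy)), n_samples)
        return sum(occupancy[t] for t in sampled) / n_samples

    if __name__ == "__main__":
        truth = simulate_occupancy(10_000)
        true_duty = sum(truth) / len(truth)
        est = random_sampling_estimate(truth, n_samples=200)
        print(f"true duty cycle {true_duty:.3f}, estimate from 2% of slots {est:.3f}")
    ```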

    On realistic target coverage by autonomous drones

    Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. To achieve the full potential of such drones, it is necessary to develop new enhanced formulations of both common and emerging sensing scenarios. Namely, several fundamental challenges in visual sensing are yet to be solved including (1) fitting sizable targets in camera frames; (2) positioning cameras at effective viewpoints matching target poses; and (3) accounting for occlusion by elements in the environment, including other targets. In this article, we introduce Argus, an autonomous system that utilizes drones to collect target information incrementally through a two-tier architecture. To tackle the stated challenges, Argus employs a novel geometric model that captures both target shapes and coverage constraints. Recognizing drones as the scarcest resource, Argus aims to minimize the number of drones required to cover a set of targets. We prove this problem is NP-hard, and even hard to approximate, before deriving a best-possible approximation algorithm along with a competitive sampling heuristic which runs up to 100× faster according to large-scale simulations. To test Argus in action, we demonstrate and analyze its performance on a prototype implementation. Finally, we present a number of extensions to accommodate more application requirements and highlight some open problems.
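
    The coverage problem Argus addresses is related to set cover. As a rough, generic illustration of viewpoint selection (not the paper's own approximation algorithm or geometric coverage model), a greedy heuristic might look like this:

    ```python
    # Generic greedy set-cover heuristic, shown only to convey the flavour of
    # minimising the number of viewpoints/drones needed to cover a set of targets.
    def greedy_cover(candidate_viewpoints, targets):
        """candidate_viewpoints: dict viewpoint_id -> set of target ids it covers.
        Returns a small list of viewpoints whose union covers all coverable targets."""
        uncovered = set(targets)
        chosen = []
        while uncovered:
            best = max(candidate_viewpoints,
                       key=lambda v: len(candidate_viewpoints[v] & uncovered))
            gain = candidate_viewpoints[best] & uncovered
            if not gain:            # remaining targets cannot be covered at all
                break
            chosen.append(best)
            uncovered -= gain
        return chosen

    if __name__ == "__main__":
        viewpoints = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}, "v4": {2, 6}}
        print(greedy_cover(viewpoints, targets={1, 2, 3, 4, 5, 6}))  # ['v1', 'v3']
    ```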

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
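
    For reference, the "de-facto standard formulation" alluded to is maximum a posteriori estimation over a factor graph, which under Gaussian noise reduces to nonlinear least squares. The notation below is generic rather than copied from the paper:

    ```latex
    % MAP / factor-graph formulation commonly taken as the standard SLAM back-end
    \mathcal{X}^{\star} = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                        = \arg\max_{\mathcal{X}} \, p(\mathcal{X}) \prod_{k} p(z_k \mid \mathcal{X}_k)
    % which, assuming Gaussian measurement noise with information matrices \Omega_k,
    % becomes a nonlinear least-squares problem:
    \mathcal{X}^{\star} = \arg\min_{\mathcal{X}} \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}
    ```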

    Data-driven design of intelligent wireless networks: an overview and tutorial

    Data science or "data-driven research" is a research approach that uses real-life data to gain insight into the behavior of systems. It enables the analysis of small and simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks and advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware-specific influences. These interactions can lead to a difference between real-world functioning and design-time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; and (v) provides the reader with the necessary datasets and scripts to go through the tutorial steps.
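
    As a toy illustration of the knowledge-discovery pipeline and the device-type identification example, the following sketch (synthetic data and invented features, not the paper's dataset or scripts) trains a classifier on simple per-flow traffic statistics:

    ```python
    # Toy end-to-end sketch: derive simple features from synthetic traffic traces
    # and classify device types. Feature choices and data are invented for brevity.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def synth_device(mean_pkt, mean_iat, n=200):
        """Fake per-flow features: [mean packet size (bytes), mean inter-arrival (s)]."""
        return np.column_stack([rng.normal(mean_pkt, 30, n), rng.normal(mean_iat, 0.05, n)])

    # Two hypothetical device classes with different traffic patterns.
    X = np.vstack([synth_device(120, 0.5), synth_device(900, 0.05)])
    y = np.array([0] * 200 + [1] * 200)          # 0 = sensor node, 1 = camera

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```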

    Economic-based Distributed Resource Management and Scheduling for Grid Computing

    Computational Grids, emerging as an infrastructure for next-generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time, the management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between the deadline, budget, and required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.
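
    As a simplified illustration of the deadline/budget trade-off described above (not the thesis's actual scheduling algorithms; all names and numbers are invented), one can greedily fill the cheapest resources that still meet the deadline:

    ```python
    # Simplified deadline/budget trade-off: assign identical jobs to the cheapest
    # resources that can still complete them within the user's deadline.
    def dbc_schedule(n_jobs, job_len, resources, deadline):
        """resources: list of dicts {'name', 'speed' (relative), 'price' ($/sec)}.
        Each resource runs its jobs sequentially; time per job = job_len / speed.
        Returns {resource name: jobs assigned} or None if the deadline is infeasible."""
        plan = {r["name"]: 0 for r in resources}
        # Cheapest cost per job first.
        for r in sorted(resources, key=lambda r: r["price"] * job_len / r["speed"]):
            capacity = int(deadline * r["speed"] // job_len)  # jobs finishable by the deadline
            take = min(capacity, n_jobs)
            plan[r["name"]] += take
            n_jobs -= take
            if n_jobs == 0:
                return plan
        return None  # not enough capacity before the deadline

    if __name__ == "__main__":
        resources = [
            {"name": "cheap-cluster", "speed": 1.0, "price": 0.01},
            {"name": "fast-cluster", "speed": 4.0, "price": 0.10},
        ]
        print(dbc_schedule(n_jobs=100, job_len=60.0, resources=resources, deadline=1800.0))
    ```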

    Intelligent deployment strategies for passive underwater sensor networks

    Passive underwater sensor networks are often used to monitor a general area of the ocean, a port or military installation, or to detect underwater vehicles near a high value unit at sea, such as a fuel ship or aircraft carrier. Deploying an underwater sensor network across a large area of interest (AOI) for military surveillance purposes is a significant challenge due to the inherent difficulties posed by the underwater channel in terms of sensing and communications between sensors. Moreover, monetary constraints, arising from the high cost of these sensors and their deployment, limit the number of available sensors. As a result, sensor deployment must be done as efficiently as possible. The objective of this work is to develop a deployment strategy for passive underwater sensors in an area clearance scenario, where there is no apparent target for an adversary to gravitate towards, such as a ship or a port, while considering all factors pertinent to underwater sensor deployment. These factors include sensing range, communications range, monetary costs, link redundancy, range dependence, and probabilistic visitation. A complete treatment of the underwater sensor deployment problem is presented in this work, from determining the purpose of the sensor field to physically deploying the sensors. Assuming a field designer is given a suboptimal number of sensors, they must be methodically allocated across an AOI. The Game Theory Field Design (GTFD) model, proposed in this work, is able to accomplish this task by evaluating the acoustic characteristics across the AOI and allocating sensors accordingly. Since GTFD considers only circular sensing coverage regions, an extension is proposed to consider irregularly shaped regions. Sensor deployment locations are planned using a proposed evolutionary approach, called the Underwater Sensor Deployment Evolutionary Algorithm, which utilizes two suitable network topologies, mesh and cluster. The effects of these topologies, and of a sensor's communications range, on the sensing capabilities of a sensor field are also investigated. Lastly, the impact of deployment imprecision on the connectivity of an underwater sensor field using a mesh topology is analyzed, for cases where sensor locations after deployment do not exactly coincide with the planned sensor locations.
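
    To give a flavour of evolutionary sensor placement, the sketch below is a generic (1+1) evolution-strategy toy with an invented circular-coverage objective; it is not the thesis's Underwater Sensor Deployment Evolutionary Algorithm, acoustic model, or topology handling:

    ```python
    # Generic evolutionary placement sketch: mutate one sensor position per
    # generation and keep the change if coverage of sampled points improves.
    import math
    import random

    random.seed(0)
    AOI = (100.0, 100.0)          # width, height of the area of interest
    SENSE_RANGE = 15.0            # assumed circular sensing radius
    SAMPLE_PTS = [(random.uniform(0, AOI[0]), random.uniform(0, AOI[1])) for _ in range(200)]

    def coverage(layout):
        """Fraction of sample points within sensing range of at least one sensor."""
        covered = sum(any(math.dist(p, s) <= SENSE_RANGE for s in layout) for p in SAMPLE_PTS)
        return covered / len(SAMPLE_PTS)

    def evolve(n_sensors=10, generations=300, sigma=5.0):
        layout = [(random.uniform(0, AOI[0]), random.uniform(0, AOI[1])) for _ in range(n_sensors)]
        best = coverage(layout)
        for _ in range(generations):
            i = random.randrange(n_sensors)
            cand = list(layout)
            cand[i] = (min(max(cand[i][0] + random.gauss(0, sigma), 0), AOI[0]),
                       min(max(cand[i][1] + random.gauss(0, sigma), 0), AOI[1]))
            score = coverage(cand)
            if score >= best:
                layout, best = cand, score
        return layout, best

    if __name__ == "__main__":
        _, best = evolve()
        print(f"covered fraction after evolution: {best:.2f}")
    ```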