54 research outputs found

    An energy-aware scheduling approach for resource-intensive jobs using smart mobile devices as resource providers

    The ever-growing adoption of smart mobile devices is a worldwide phenomenon that positions smartphones and tablets as primary devices for communication and Internet access. In addition, the computing capabilities of such devices, often underutilized by their owners, are continuously improving. Today, smart mobile devices have multi-core CPUs, several gigabytes of RAM, and the ability to communicate through several wireless networking technologies. These facts have caught the attention of researchers, who have proposed leveraging the aggregated computing capabilities of smart mobile devices for running resource-intensive software. However, this idea is conditioned by key features, named singularities in the context of this thesis, that characterize resource provision with smart mobile devices. These are the ability of devices to change location (user mobility), the shared or non-dedicated nature of the resources provided (lack of ownership), and the limited operation time given by a finite energy source (exhaustible resources). Existing proposals materializing this idea differ in the combinations of singularities they target and in the way they address each singularity, which makes them suitable for distinct goals and resource exploitation opportunities. The latter are represented by real-life situations where resources provided by groups of smart mobile devices can be exploited, situations which in turn are characterized by a social context and a networking support used to link and coordinate devices. The behavior of people in a given social context configures a particular level of resource availability, while the underlying networking support imposes restrictions on how information flows, computational tasks are distributed, and results are collected. The networking support constitutes one fundamental difference between proposals, mainly because each option, i.e., ad-hoc or infrastructure-based, has its own application scenarios.
Aside from the singularities addressed and the networking support utilized, the weakest point of most proposals is their practical applicability. The performance achieved relies heavily on the accuracy with which task information, including the execution time and/or energy required for execution, is provided to the resource allocator. The expanded usage of wireless communication infrastructure in public and private buildings, e.g., shopping centres, work offices, and university campuses, constitutes a networking support that can be naturally re-utilized for leveraging the computational capabilities of smart mobile devices. In this context, this thesis aims to contribute an easy-to-implement scheduling approach for running CPU-bound applications on a cluster of smart mobile devices. The approach is aware of the finite nature of smart mobile devices' energy, and it does not depend on task information to operate. Instead, it allocates computational resources to incoming tasks using a node ranking-based strategy. The ranking weights nodes by combining static and dynamic parameters, including benchmark results, battery level, and number of queued tasks, among others. This node ranking-based task assignment, or first allocation phase, is complemented with a re-balancing phase using job stealing techniques. The second allocation phase mitigates the load imbalance caused by the non-dedicated nature of smart mobile devices' CPU usage, i.e., the effect of owner interaction, task heterogeneity, and the lack of up-to-date and accurate estimates of remaining energy. The scheduling approach is evaluated through in-vitro simulation. Another major contribution is a novel simulator which exploits energy consumption profiles of real smart mobile devices, as well as fluctuating CPU usage built upon empirical models derived from real user interaction data.
Tests that validate the simulation tool are provided, and the approach is evaluated in scenarios varying the composition of tasks and nodes, including different task arrival rates, task requirements, and levels of node resource utilization.
Fil: Hirsch Jofré, Matías Eberardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
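The two-phase scheme described in the abstract can be sketched in miniature. The weight values, field names, and stealing threshold below are illustrative assumptions, not the thesis's actual formulation:

```python
def rank_node(node, w_bench=0.5, w_batt=0.3, w_queue=0.2):
    """Score a node from static and dynamic parameters.
    Weights and 0..1 scales are assumed for illustration."""
    # Higher benchmark score and battery level raise the rank;
    # a longer task queue lowers it.
    return (w_bench * node["benchmark"]
            + w_batt * node["battery"]
            - w_queue * node["queued_tasks"])

def assign_task(nodes, task):
    """First phase: send the incoming task to the best-ranked node."""
    best = max(nodes, key=rank_node)
    best["queued_tasks"] += 1
    best["queue"].append(task)
    return best

def steal_jobs(nodes, threshold=2):
    """Second phase: idle nodes steal queued tasks from overloaded ones,
    re-balancing load caused by owner interaction and task heterogeneity."""
    idle = [n for n in nodes if n["queued_tasks"] == 0]
    busy = [n for n in nodes if n["queued_tasks"] > threshold]
    for thief in idle:
        if not busy:
            break
        victim = max(busy, key=lambda n: n["queued_tasks"])
        thief["queue"].append(victim["queue"].pop())
        victim["queued_tasks"] -= 1
        thief["queued_tasks"] += 1
```

Note that ranking needs no per-task information at all: only node-level state (benchmark, battery, queue length) feeds the allocator, which is the point of the approach.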

    Development and analysis of a homogeneous long-term precipitation network (1850-2015) and assessment of historic droughts for the island of Ireland

    Long-term precipitation series are critical for understanding emerging changes to the hydrological cycle. Given the paucity of long-term, quality-assured precipitation records in Ireland, this thesis expands the existing catalogue of long-term monthly precipitation records for the island by recovering and digitising archived data. Following bridging and updating, 25 stations are quality assured and homogenised using state-of-the-art methods and scrutiny of station metadata. Assessment of variability and change in the homogenised and extended precipitation records for the period 1850-2010 reveals positive (winter) and negative (summer) trends. Trends in records covering the typical period of digitisation (1941 onwards) are not always representative of the longer records. Using this quality-assured network of precipitation stations together with proxy rainfall reconstructions, a 250-year historic drought catalogue is established using the Standardised Precipitation Index (SPI). Documentary sources, particularly newspaper archives, spanning the last 250 years are used to (i) add confidence to the quantitative detection of drought episodes and (ii) gain insight into the socio-economic impacts of historic droughts. During the years 1850-2015, seven major drought-rich periods with an island-wide fingerprint are identified: 1854-1860, 1884-1896, 1904-1912, 1921-1924, 1932-1935, 1952-1954 and 1969-1977. These events exhibit substantial diversity in terms of drought development, severity and spatial occurrence. Results show that Ireland is drought prone, but recent decades are unrepresentative of the longer-term drought climatology. Finally, the long-term homogeneous precipitation records are further utilised to reconstruct river flows at twelve study catchments back to 1850. Reconstructed flows are analysed to identify periods of hydrological drought, and the potential of different SPI accumulations to forecast severe drought is explored.
Results demonstrate the importance of catchment characteristics in moderating the effects of meteorological drought and highlight the potential for drought forecasting in groundwater-dominated catchments. The body of work presented considerably advances understanding of the long-term hydro-climatology of a sentinel location in Europe and provides datasets and tools for more resilient water management.
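The idea behind an SPI-style index can be sketched as follows. Note the hedge in the docstring: the operational SPI fits a gamma distribution to accumulated precipitation and maps its cumulative probability onto a standard normal; this sketch substitutes plain standardised anomalies for brevity, and the window length is an illustrative choice:

```python
from statistics import mean, stdev

def spi_like(monthly_precip, window=3):
    """Simplified SPI-style index: standardised anomalies of precipitation
    accumulated over a rolling window. (The operational SPI fits a gamma
    distribution and transforms its CDF to a standard normal; plain
    z-scores are used here as a stand-in.)"""
    # Accumulate precipitation over each `window`-month period.
    sums = [sum(monthly_precip[i:i + window])
            for i in range(len(monthly_precip) - window + 1)]
    mu, sigma = mean(sums), stdev(sums)
    # Strongly negative values flag drier-than-normal periods;
    # in the real SPI, values below about -1.5 indicate severe drought.
    return [(s - mu) / sigma for s in sums]
```

Different `window` values (e.g. 3, 6, 12 months) correspond to the different "SPI accumulations" the abstract mentions, each sensitive to droughts of a different duration.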


    On designing large, secure and resilient networked systems

    2019 Summer. Includes bibliographical references. Defending large networked systems against rapidly evolving cyber attacks is challenging, for several reasons. First, cyber defenders always fight an asymmetric war: while the attacker needs to find just a single unprotected security vulnerability to launch an attack, the defender needs to identify and protect against all possible avenues of attack on the system. Various cost factors, such as (but not limited to) the costs of identifying and installing defenses, security management, manpower training and development, and system availability, make this asymmetric warfare even more challenging. Second, new cyber threats are constantly emerging, the so-called zero-day attacks; it is not possible for a cyber defender to defend against an attack for which defenses are yet unknown. In this work, we investigate the problem of designing large and complex networks that are secure and resilient. There are two specific aspects of the problem that we look into. The first is the problem of detecting anomalous activities in the network. While this problem has been variously investigated, we address it differently. We posit that anomalous activities are the result of mal-actors interacting with non-mal-actors, and that such activities are reflected in changes to the topological structure (in a mathematical sense) of the network. We formulate this problem as one of Sybil detection in networks. For our experimentation and hypothesis testing, we instantiate the problem as Sybil detection in on-line social networks (OSNs). Sybil attacks involve one or more attackers creating and introducing several mal-actors (fake identities in on-line social networks), called Sybils, into a complex network.
Depending on the nature of the network system, the goal of the mal-actors can be to unlawfully access data, to forge another user's identity and activity, or to influence and disrupt the normal behavior of the system. The second aspect that we look into is that of building resiliency in a large network that consists of several machines collectively providing a single service to the outside world. Such networks are particularly vulnerable to Sybil attacks. While our Sybil detection algorithms achieve very high levels of accuracy, they cannot guarantee that all Sybils will be detected. Thus, to protect against such "residual" Sybils (that is, those that remain potentially undetected and continue to attack the network services), we propose a novel Moving Target Defense (MTD) paradigm to build resilient networks. The core idea is that for large enterprise-level networks, the survivability of the network's mission is more important than the security of one or more of the servers. We develop protocols to relocate services from server to server in a random way, such that before an attacker has an opportunity to target a specific server and disrupt its services, the services migrate to another non-malicious server. The continuity of the large network's service is thus sustained. We evaluate the effectiveness of our proposed protocols using theoretical analysis, simulations, and experimentation. For the Sybil detection problem we use both synthetic and real-world data sets and evaluate the algorithms for accuracy of Sybil detection. For the moving target defense protocols we implement a proof-of-concept in the context of access control as a service and run several large-scale simulations. The proof-of-concept demonstrates the effectiveness of the MTD paradigm. We evaluate the computation and communication complexity of the protocols as we scale up to larger and larger networks.
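The core MTD idea, randomly relocating a service before an attacker can profile its current host, can be sketched in a few lines. This is an illustrative skeleton only; the function names, epoch model, and uniform choice of next host are assumptions, not the protocols developed in the thesis:

```python
import random

def migrate(servers, current, rng):
    """Relocate the service to a host chosen uniformly at random from
    all servers except the current one."""
    return rng.choice([s for s in servers if s != current])

def run_epochs(servers, start, epochs, seed=0):
    """Trace of service locations over successive migration epochs.
    A seeded RNG stands in for the protocol's randomness source."""
    rng = random.Random(seed)
    trace = [start]
    for _ in range(epochs):
        trace.append(migrate(servers, trace[-1], rng))
    return trace
```

The security argument rests on the epoch length being shorter than the time an attacker needs to target and compromise a specific server, so the service has always moved on by the time the attack lands.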

    The Future of the Operating Room: Surgical Preplanning and Navigation using High Accuracy Ultra-Wideband Positioning and Advanced Bone Measurement

    This dissertation embodies the diversity and creativity of my research, of which much has been peer-reviewed, published in archival quality journals, and presented nationally and internationally. Portions of the work described herein have been published in the fields of image processing, forensic anthropology, physical anthropology, biomedical engineering, clinical orthopedics, and microwave engineering. The problem studied is primarily that of developing the tools and technologies for a next-generation surgical navigation system. The discussion focuses on the underlying technologies of a novel microwave positioning subsystem and a bone analysis subsystem. The methodologies behind each of these technologies are presented in the context of the overall system, with the salient results helping to elucidate the difficult facets of the problem. The microwave positioning system is currently the highest accuracy wireless ultra-wideband positioning system that can be found in the literature. The challenges in producing a system with these capabilities are many, and the research and development in solving these problems should further the art of high accuracy pulse-based positioning.

    An investigation into applying ontologies to the UK railway industry

    The uptake of ontologies in the Semantic Web and Linked Data movements has proven their excellence in managing mass data. Following the Linked Data movement, ontologies are applied to large complex systems to facilitate better data management. Some industries, e.g., oil and gas, have attempted to use ontologies to manage their internal data structures. Researchers have dedicated effort to designing ontologies for the rail system and have discussed the potential benefits thereof. However, despite successful establishment in some industries, the effort made in research, and the interest from major UK rail operation participants, there is no evidence that rail ontologies have been applied to the UK rail system. This thesis analyses the factors that hinder the application of rail ontologies to the UK rail system. Based on the factors identified, the rest of the thesis presents corresponding solutions. The demonstrations show how ontologies can fit into particular tasks with improvements, aiming to provide inspiration and insights for future research into the application of ontology-based systems in the UK rail system.

    Winona Daily News

    https://openriver.winona.edu/winonadailynews/2329/thumbnail.jp

    Development of semantic data models to support data interoperability in the rail industry

    Railways are large, complex systems that comprise many heterogeneous subsystems and parts. As the railway industry continues to enjoy increasing passenger and freight custom, ways of deriving greater value from the knowledge within these subsystems are increasingly sought. Interfaces to and between systems are rare, making data sharing and analysis difficult. Semantic data modelling provides a method of integrating data from disparate sources by encoding knowledge about a problem domain or world into machine-interpretable logic and using this knowledge to encode and infer data context and meaning. The uptake of this technique in the Semantic Web and Linked Data movements in recent years has provided a mature set of techniques and toolsets for designing and implementing ontologies and linked data applications. This thesis demonstrates ways in which semantic data models and OWL ontologies can be used to foster data exchange across the railway industry. It sets out a novel methodology for the creation of industrial semantic models, and presents a new set of railway domain ontologies to facilitate integration of infrastructure-centric railway data. Finally, the design and implementation of two prototype systems is described, each of which uses the techniques and ontologies in solving a known problem.
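The phrase "encode and infer data context and meaning" can be made concrete with a toy triple store. Real systems would use OWL ontologies and an RDF library; this dependency-free sketch, with class and asset names invented for illustration, shows the essential mechanism: a subclass hierarchy encoded as data lets facts from one source (an infrastructure inventory) and another (maintenance records) be queried together at a level neither source states explicitly:

```python
# Triples from two hypothetical sources plus a tiny class hierarchy.
triples = {
    ("Switch", "subClassOf", "TrackAsset"),
    ("TrackAsset", "subClassOf", "Asset"),
    ("SW-104", "type", "Switch"),               # from an infrastructure inventory
    ("SW-104", "lastInspected", "2015-06-01"),  # from a maintenance system
}

def infer_types(triples):
    """Propagate `type` along `subClassOf` links until a fixed point,
    a hand-rolled stand-in for an RDFS/OWL reasoner."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(s, "type", sup)
               for (s, p, cls) in facts if p == "type"
               for (sub, q, sup) in facts
               if q == "subClassOf" and sub == cls}
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = infer_types(triples)
# A query for all Assets now finds SW-104, even though no source
# ever stated ("SW-104", "type", "Asset") directly.
```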

    A generic approach for automating experiments on computer networks

    This thesis proposes a generic approach to automating network experiments for scenarios involving any networking technology on any type of network evaluation platform. The proposed approach is based on abstracting the experiment life cycle of the evaluation platforms into generic steps, from which a generic experiment model and experimentation primitives are derived. A generic experimentation architecture is proposed, composed of an experiment model, a programmable experiment interface, and an orchestration algorithm that can be adapted to network simulators, emulators and testbeds alike. The feasibility of the approach is demonstrated through the implementation of a framework capable of automating experiments using any combination of these platforms. Three main aspects of the framework are evaluated: its extensibility to support any type of platform, its efficiency in orchestrating experiments, and its flexibility to support diverse use cases, including education, platform management, and experimentation with multiple platforms. The results show that the proposed approach can be used to efficiently automate experimentation on diverse platforms for a wide range of scenarios.
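The architecture described, generic life-cycle steps behind a common interface, with one orchestrator driving any platform adapter, can be sketched as follows. The step names and class names are assumptions for illustration, not the framework's actual API:

```python
from abc import ABC, abstractmethod

class Platform(ABC):
    """Adapter interface: simulators, emulators and testbeds each
    implement the same generic life-cycle steps."""
    @abstractmethod
    def deploy(self): ...
    @abstractmethod
    def configure(self): ...
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def collect(self): ...
    @abstractmethod
    def release(self): ...

def orchestrate(platform: Platform):
    """Drive the generic life cycle on any platform adapter."""
    platform.deploy()
    platform.configure()
    platform.start()
    results = platform.collect()
    platform.release()
    return results

class LoggingSimulator(Platform):
    """Toy adapter that records each step; a real back-end would call a
    simulator, emulator or testbed API at each point."""
    def __init__(self):
        self.log = []
    def deploy(self): self.log.append("deploy")
    def configure(self): self.log.append("configure")
    def start(self): self.log.append("start")
    def collect(self):
        self.log.append("collect")
        return {"log": list(self.log)}
    def release(self): self.log.append("release")
```

Because the orchestrator depends only on the abstract interface, an experiment mixing several platforms reduces to running the same loop over a list of adapters.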