
    A voyage to Arcturus: a model for automated management of a WLCG Tier-2 facility

    With the current trend towards "On Demand Computing" in big data environments, it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large-scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource-constrained WLCG Tier-2 sites. Together with the growing need for bespoke monitoring and collection of Grid-related metrics, this calls for a more lightweight and modular approach. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites from "off the shelf" software components. The research into this automation framework covers the use of both IPMI and SNMP for physical device management, as well as the use of SNMP as a monitoring/data-sampling layer, so that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance, as autonomous systems recognise when services are in a non-functional state.
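    As a concrete illustration of such an SNMP data-sampling layer, the sketch below polls a node's 1-minute load average and applies a simple threshold policy. This is a minimal sketch only, assuming the Python pysnmp library (4.x synchronous hlapi), SNMP v2c with a community string, and the standard UCD-SNMP load OID; the host name and threshold are hypothetical, and the paper's framework is not tied to this code.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

def read_load(host, community="public"):
    """Poll the 1-minute load average (UCD-SNMP-MIB::laLoad.1) of a node."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),        # SNMP v2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.4.1.2021.10.1.3.1")),
    ))
    if error_indication or error_status:
        return None                                 # node unreachable or error
    return float(var_binds[0][1])

# Hypothetical policy: flag an overloaded worker node for automated draining.
load = read_load("wn01.example.org")
if load is not None and load > 16.0:
    print("node overloaded; candidate for draining")
```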

    The RAppArmor Package: Enforcing Security Policies in R Using Dynamic Sandboxing on Linux

    The increasing availability of cloud computing and scientific supercomputers brings great potential for making R accessible through public or shared resources. This allows us to efficiently run code requiring many cycles or much memory, or to embed R functionality into, e.g., systems and web services. However, some important security concerns need to be addressed before this can be put into production. The prime use case in the design of R has always been a single statistician running R on the local machine through the interactive console. The execution environment of R is therefore entirely unrestricted, which could result in malicious behavior or excessive use of hardware resources in a shared environment. Properly securing an R process turns out to be a complex problem. We describe various approaches and illustrate potential issues using some of our personal experiences in hosting public web services. Finally, we introduce the RAppArmor package: a Linux-based reference implementation for dynamic sandboxing in R at the level of the operating system.
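    To make the resource-restriction side of sandboxing concrete, the sketch below caps the address space and CPU time of a child R process before it starts. This is a minimal Python sketch of the generic POSIX rlimit mechanism only; it is not RAppArmor's R-level API (which additionally applies AppArmor security profiles in the kernel), and the Rscript invocation and limit values are assumptions.

```python
import resource
import subprocess

def limit_resources():
    # Runs in the child just before exec: cap the address space at 256 MiB
    # and CPU time at 10 s, analogous to the rlimit controls a sandbox
    # applies before evaluating untrusted code.
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))

proc = subprocess.run(
    ["Rscript", "-e", "sum(rnorm(1e6))"],   # untrusted R code (example)
    preexec_fn=limit_resources,
    capture_output=True, text=True, timeout=30,
)
print(proc.stdout)
```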

    Managing a Fleet of Autonomous Mobile Robots (AMR) using Cloud Robotics Platform

    In this paper, we provide details of implementing a system for managing a fleet of autonomous mobile robots (AMRs) operating in a factory or warehouse premises. While the robots are themselves autonomous in their motion and obstacle-avoidance capability, the target destination for each robot is provided by a global planner. The global planner and the ground vehicles (robots) constitute a multi-agent system (MAS) which communicates over a wireless network. Three different approaches are explored for the implementation. The first two make use of the distributed-computing-based Networked Robotics architecture and the communication framework of the Robot Operating System (ROS) itself, while the third uses the Rapyuta Cloud Robotics framework. The comparative performance of these approaches is analyzed through simulation as well as real-world experiments with actual robots. These analyses provide an in-depth understanding of the inner workings of the Cloud Robotics Platform in contrast to the usual ROS framework. The insight gained through this exercise will be valuable to students as well as practicing engineers interested in implementing similar systems elsewhere. In the process, we also identify a few critical limitations of the current Rapyuta platform and provide suggestions to overcome them. (Comment: 14 pages, 15 figures, journal paper)
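    As a toy illustration of the global planner's role, the sketch below assigns each pending goal to the nearest idle robot. This greedy allocation is our own simplification for illustration, not the paper's algorithm; in the real system the planner dispatches goals to the robots over ROS topics or the Rapyuta framework rather than returning them from a function.

```python
import math

def assign_goals(robots, goals):
    """Greedy global planner: give each pending goal to the nearest idle robot.

    robots: {name: (x, y)} positions of idle robots; goals: list of (x, y).
    Returns {name: (x, y)} goal assignments.
    """
    assignments = {}
    idle = dict(robots)
    for goal in goals:
        if not idle:
            break                                   # more goals than robots
        nearest = min(idle, key=lambda name: math.dist(idle[name], goal))
        assignments[nearest] = goal
        del idle[nearest]
    return assignments

# Two idle AMRs, two goals (all coordinates invented):
print(assign_goals({"amr1": (0, 0), "amr2": (5, 5)}, [(1, 1), (6, 4)]))
```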

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them we can mention optimized approaches to state restore, as well as techniques for load balancing or for (dynamically) controlling the degree of speculation, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent prompt reaction to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, increase the incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt-based support for bringing control back to platform mode so the CPU can be reassigned at very fine-grain periods. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, showing how our proposal effectively reduces the incidence of causality errors, as compared to traditional Time Warp, especially when running with higher degrees of parallelism.
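    The timer-driven preemption idea can be sketched in user space: a POSIX interval timer periodically interrupts event processing so the loop can check whether a lower-timestamp event is waiting and, if so, yield the CPU. The Python sketch below is an analogy only; the paper's mechanism is a kernel-level timer-interrupt module with true dual-mode (application vs. platform) execution, whereas here the handler merely sets a flag polled between processing slices. Unix-only, with invented event names and timings.

```python
import heapq
import signal
import time

preempt_requested = False

def on_timer(signum, frame):
    # Stand-in for the platform-mode timer interrupt: just raise a flag that
    # the event loop polls between fine-grained slices of event processing.
    global preempt_requested
    preempt_requested = True

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.001, 0.001)  # ~1 ms time slice

events = [(12.5, "evt-a"), (3.1, "evt-b"), (7.0, "evt-c")]  # (timestamp, id)
heapq.heapify(events)

while events:
    ts, payload = heapq.heappop(events)
    for _ in range(5):                  # event work, split into short slices
        time.sleep(0.0005)
        if preempt_requested:
            preempt_requested = False
            # In a real run, stragglers arrive asynchronously from other
            # workers; preempt if a lower-timestamp event is now pending.
            if events and events[0][0] < ts:
                heapq.heappush(events, (ts, payload))   # yield the CPU
                break
    else:
        print(f"completed {payload} at simulation time {ts}")

signal.setitimer(signal.ITIMER_REAL, 0)             # cancel the timer
```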

    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, while many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within `Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control, as happens with cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as for safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...] (Comment: 14 pages, 8 figures)
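    To make the flow-control notion concrete, the sketch below checks whether a flow between two entities is permitted under classic secrecy/integrity label rules: along a permitted flow, secrecy tags may only grow and integrity tags may only shrink. This is a generic IFC check written in Python for illustration; CamFlow's actual enforcement lives in the OS and its label model is richer, and the tag names here are invented.

```python
def flow_allowed(src, dst):
    """Permit a flow src -> dst iff dst's secrecy label dominates src's and
    src's integrity label dominates dst's (labels are sets of tags)."""
    return (src["secrecy"] <= dst["secrecy"]
            and dst["integrity"] <= src["integrity"])

app = {"secrecy": {"medical"}, "integrity": {"vetted"}}
log = {"secrecy": {"medical", "audit"}, "integrity": set()}

print(flow_allowed(app, log))  # True: secrecy grows, integrity may drop
print(flow_allowed(log, app))  # False: would leak the extra "audit" tag
```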

    Contrasting Community Building in Sponsored and Community Founded Open Source Projects

    Prior characterizations of open source projects have been based on the model of a community-founded project. More recently, a second model has emerged, in which organizations spin out internally developed code to a public forum. Based on field work on open source projects, we compare the lifecycle differences between these two models. We identify problems unique to spinout projects, particularly in attracting and building an external community. We illustrate these issues with a feasibility analysis of a proposed open source project based on VistA, the primary healthcare information system of the U.S. Department of Veterans Affairs. This example illuminates the complexities of building a community after a code base has been developed and suggests that open source software can be used to transfer technology to the private sector.