    Efficient and Reasonable Object-Oriented Concurrency

    Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that provides not only data-race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose influence the underlying semantics to increase the amount of concurrent execution that is possible, exclude certain classes of deadlocks, and enable greater performance. These extensions are used as the basis of an efficient runtime and an optimization pass that together improve performance 15x over a baseline implementation. This new implementation of SCOOP is also 2x faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads.
    Comment: Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '15). ACM, 2015.
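
    To make the execution model concrete, the Python sketch below emulates the core SCOOP idea in miniature: every shared object is owned by a single "processor", and all calls on it are funnelled through that processor's queue, which rules out data races by construction. The Processor class and separate_call name are illustrative stand-ins, not SCOOP's actual API or the paper's runtime.

        # A minimal sketch of SCOOP-style "separate" calls, assuming a simplified
        # model: each shared object is owned by exactly one processor (thread),
        # and every call on it is serialized through that processor's job queue.
        # All names here are illustrative, not SCOOP's real interface.
        import queue
        import threading

        class Processor:
            """Owns objects; executes all calls on them sequentially (no data races)."""
            def __init__(self):
                self._jobs = queue.Queue()
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    job, done = self._jobs.get()
                    job()               # runs exclusively on this processor's thread
                    done.set()

            def separate_call(self, job):
                """Asynchronously log a call; returns an event for optional waiting."""
                done = threading.Event()
                self._jobs.put((job, done))
                return done

        counter = {"value": 0}
        owner = Processor()

        # Many clients increment the shared counter; serialization on the owning
        # processor guarantees race freedom without explicit locks.
        events = [owner.separate_call(lambda: counter.update(value=counter["value"] + 1))
                  for _ in range(1000)]
        for e in events:
            e.wait()
        print(counter["value"])  # always 1000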

    Master/worker parallel discrete event simulation

    The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed, providing robust execution under a dynamic set of services, with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant interaction by users. Research questions and challenges associated with the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations to increase the efficiency of large-scale simulation execution through distributed master service design and intrinsic overhead reduction. New techniques are proposed and examined for addressing the challenges that optimistic parallel discrete event simulation faces on metacomputing infrastructures, such as rollbacks and message unsending, using an inherently different computation paradigm built on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means for high-throughput parallel discrete event simulation, enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.
    Ph.D. Committee Chair: Fujimoto, Richard; Committee Members: Bader, David; Perumalla, Kalyan; Riley, George; Vuduc, Richard.
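
    The Python sketch below illustrates the time-window idea under one simplifying assumption: the master releases only events whose timestamps fall inside the current window, so workers never receive an event that would force a rollback past the window's lower edge. All names are illustrative; the actual master/worker services and rollback machinery are far richer than this.

        # A minimal sketch of master/worker event distribution with time windows,
        # assuming the master withholds events beyond the current window. The
        # dispatch to real workers is replaced by a print statement.
        import heapq

        def run_master(events, window):
            """events: list of (timestamp, payload); window: width of each time window."""
            pending = list(events)
            heapq.heapify(pending)
            now = 0.0
            while pending:
                horizon = now + window
                batch = []
                # Release only events inside [now, horizon) to (conceptually) idle workers.
                while pending and pending[0][0] < horizon:
                    batch.append(heapq.heappop(pending))
                for ts, payload in batch:    # stand-in for dispatch to worker services
                    print(f"window [{now:.1f},{horizon:.1f}): worker processes {payload} @ {ts:.1f}")
                now = horizon                # advance only after the window drains

        run_master([(0.2, "arrive"), (0.9, "depart"), (1.5, "arrive"), (2.1, "depart")],
                   window=1.0)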

    Process tracking for dynamic tuning applications on the grid

    The computational resources needed by the scientific community to solve problems exceed the currently available infrastructure. Higher performance is required due to constant research progress, new problems, and the increasing level of detail in current ones. Users build new large-scale distributed systems, such as computational Grids, to achieve the desired performance. Grid systems are generally built on top of available computational resources, such as clusters, parallel machines, or storage devices, distributed across different organizations and interconnected by a network. Tuning applications in a Grid environment is a hard task due to system characteristics such as multi-cluster job distribution among different local schedulers and dynamic network bandwidth behavior. We have a Monitoring, Analysis and Tuning Environment (MATE) that allows dynamic performance tuning of applications within a cluster. Because of the many software layers present on the Grid, two executions of the same application may use different resources. To tune the application's jobs, our tool needs to locate and follow their execution within the system. We call this the process tracking problem. This paper presents the integration of MATE with the Grid and the two process tracking approaches implemented to solve the process tracking problem within Grid systems.
    VII Workshop de Procesamiento Distribuido y Paralelo (WPDP); Red de Universidades con Carreras en Informática (RedUNCI).
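
    As a rough illustration of what process tracking involves, the Python sketch below polls each cluster's local scheduler until a job's processes have been placed, then reports their hosts so a tuning layer could attach. The query_scheduler function is a hypothetical stand-in for a real scheduler interface; the paper's two actual approaches are not reproduced here.

        # A minimal sketch of one plausible process tracking approach: poll the
        # local schedulers of all clusters to discover where a job's processes
        # landed. query_scheduler and its fake placements are hypothetical.
        import time

        def query_scheduler(cluster, job_id):
            """Hypothetical: return hosts running job_id, or [] if not yet placed."""
            fake_placements = {("clusterA", 42): ["nodeA3", "nodeA7"]}
            return fake_placements.get((cluster, job_id), [])

        def track_job(clusters, job_id, poll_seconds=1.0, attempts=5):
            """Locate a job's processes across clusters so a tuner can attach."""
            for _ in range(attempts):
                for cluster in clusters:
                    hosts = query_scheduler(cluster, job_id)
                    if hosts:
                        return cluster, hosts   # hand off to the monitoring/tuning layer
                time.sleep(poll_seconds)
            raise LookupError(f"job {job_id} not located")

        print(track_job(["clusterA", "clusterB"], job_id=42))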

    The Application of Agent-Based Technology to Packaging Line

    Control systems used by manufacturing companies today are often centralized: the controller is concentrated in one location. As the production lines at Tetra Pak become more and more complex with growing customer demand, a centralized approach becomes increasingly inadequate. Agent-based technology provides a way to implement a desirable, robust, and decentralized manufacturing environment. Agents can be viewed as generalizations of objects in object-oriented programming that take decisions based on their own intelligence as well as that of others. A community of agents efficiently cooperating to reach a higher-level or global goal is called a MAS (Multi-Agent System). This thesis deals with applying an agent-based solution to the Tetra Pak A3 packaging line. A thorough study of decentralized systems in manufacturing environments is made, which leads to the choice of CBR (Case-Based Reasoning) as the reasoning paradigm for the agents. Agents adapt solutions to new problems by initiating voting procedures among all agents in the community. FIPA (the Foundation for Intelligent Physical Agents) provides specifications for the infrastructure of the architecture surrounding the agents, with a focus on communication and organization. Studies of seven different cases are carried out to verify the strength of the control policies developed. Agent behaviour and line configuration are simulated in the Line Simulator implemented in Matlab.
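
    A small Python sketch of the case-based-reasoning-plus-voting idea: each agent retrieves the nearest case from its own case base and proposes that case's action, and the community adopts the plurality winner. The feature encoding, distance measure, and action names are illustrative assumptions, not the thesis's actual design.

        # A minimal sketch of CBR retrieval plus community voting. Each agent
        # holds its own case base of (situation, action) pairs; the situation
        # encoding and actions below are illustrative.
        from collections import Counter

        class Agent:
            def __init__(self, case_base):
                self.case_base = case_base    # list of (feature_vector, action)

            def propose(self, situation):
                """Retrieve the nearest stored case and propose its action."""
                def dist(case):
                    features, _ = case
                    return sum((a - b) ** 2 for a, b in zip(features, situation))
                return min(self.case_base, key=dist)[1]

        def community_decision(agents, situation):
            """Every agent votes; the plurality action wins."""
            votes = Counter(agent.propose(situation) for agent in agents)
            return votes.most_common(1)[0][0]

        agents = [
            Agent([((0, 0), "keep_running"), ((5, 1), "slow_down")]),
            Agent([((1, 0), "keep_running"), ((4, 2), "stop_filler")]),
            Agent([((5, 2), "slow_down")]),
        ]
        # situation = (queue_length, fault_count) for a machine on the line
        print(community_decision(agents, situation=(4, 1)))   # -> "slow_down"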

    Big data workflows: Locality-aware orchestration using software containers

    The emergence of the Edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing Big Data processing solutions provide limited support for handling data locality and are inefficient in processing the small and frequent events specific to Edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric Big Data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo Workflows and demonstrate a significant performance improvement in the execution speed for processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyse individual aspects affecting the performance of the overall solution.
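
    The Python sketch below captures the locality-aware placement idea in miniature: each workflow step is scheduled on the node that already holds its input data, with a least-loaded fallback when no location is known. The function and data names are illustrative; the paper's container orchestration and data-source handling are elided.

        # A minimal sketch of locality-aware step placement. The dispatch to
        # long-lived containers is omitted; all names are illustrative, not
        # the paper's actual interfaces.
        def place_steps(steps, data_location, nodes):
            """steps: list of (step_name, input_data); data_location: data -> node."""
            load = {node: 0 for node in nodes}
            placement = {}
            for step, data in steps:
                node = data_location.get(data)        # prefer the node holding the input
                if node is None:
                    node = min(load, key=load.get)    # fallback: least-loaded node
                placement[step] = node
                load[node] += 1
            return placement

        steps = [("ingest", "sensor-batch-1"), ("clean", "sensor-batch-1"),
                 ("train", "model-seed")]
        data_location = {"sensor-batch-1": "edge-node-1"}
        print(place_steps(steps, data_location, nodes=["edge-node-1", "edge-node-2"]))
        # -> {'ingest': 'edge-node-1', 'clean': 'edge-node-1', 'train': 'edge-node-2'}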
