
    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is widely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, relieving the user of the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command-line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiplication using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization.
    Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
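    The core mechanism described above, expanding keyword-value parameter sets into individual runs, can be sketched as follows. This is a minimal illustration, not the PaPaS implementation: the parameter names (`OMP_NUM_THREADS`, `matrix_size`) and the dictionary-based syntax are assumptions chosen to mirror the OpenMP matrix-multiply study mentioned in the abstract.

```python
# Hypothetical sketch of expanding a keyword-value parameter study
# (syntax assumed, not taken from the PaPaS paper): each keyword maps
# to a list of values, and the study is their Cartesian product.
from itertools import product

def expand_parameter_sets(params):
    """Expand {keyword: [values]} into a list of all parameter combinations."""
    keys = sorted(params)
    return [dict(zip(keys, combo)) for combo in product(*(params[k] for k in keys))]

# e.g. a performance study over thread count and matrix size
study = {"OMP_NUM_THREADS": [1, 2, 4], "matrix_size": [512, 1024]}
runs = expand_parameter_sets(study)
print(len(runs))  # 6 combinations (3 thread counts x 2 sizes)
```

    Each resulting dictionary corresponds to one job the framework would submit, e.g. as environment variables or command-line arguments.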

    Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library

    Remote data access for data analysis in high performance computing is commonly done with specialized data access protocols and storage systems. These protocols are highly optimized for high throughput on very large datasets, multi-stream access, high availability, low latency, and efficient parallel I/O. The purpose of this paper is to describe how we have adapted a generic protocol, the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative for high performance I/O and data analysis applications in a global computing grid: the Worldwide LHC Computing Grid. In this work, we first analyze the design differences between the HTTP protocol and the most common high performance I/O protocols, pointing out the main performance weaknesses of HTTP. Then, we describe in detail how we solved these issues. Our solutions have been implemented in a toolkit called davix, available through several recent Linux distributions. Finally, we describe the results of our benchmarks, where we compare the performance of davix against an HPC-specific protocol for a data analysis use case.
    Comment: Presented at: Very Large Data Bases (VLDB) 2014, Hangzhou
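    One standard technique for turning HTTP into a multi-stream I/O protocol, as the abstract alludes to, is splitting a large file into byte ranges that can be fetched concurrently with HTTP `Range` requests. The sketch below shows only the range-partitioning step; it is a generic illustration of the technique, not davix's actual API or internals.

```python
# Minimal sketch: partition a file of total_size bytes into n_streams
# contiguous ranges usable in 'Range: bytes=start-end' headers
# (HTTP byte ranges are inclusive on both ends).
def byte_ranges(total_size, n_streams):
    """Return a list of (start, end) byte ranges covering [0, total_size)."""
    chunk = total_size // n_streams
    ranges = []
    for i in range(n_streams):
        start = i * chunk
        # the last stream absorbs any remainder
        end = total_size - 1 if i == n_streams - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

print(byte_ranges(100, 4))  # [(0, 24), (25, 49), (50, 74), (75, 99)]
```

    Each range would then be requested on its own connection, and the responses reassembled in order, trading per-request overhead for aggregate throughput.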

    Mathematical and computer modeling of electro-optic systems using a generic modeling approach

    The conventional approach to modelling electro-optic sensor systems is to develop separate models for individual systems or classes of system, depending on the detector technology employed in the sensor and the application. However, this ignores commonality in the design and components of these systems. A generic approach is presented for modelling a variety of sensor systems operating in the infrared waveband that also allows systems to be modelled with different levels of detail and at different stages of the product lifecycle. The provision of different model types (parametric and image-flow descriptions) within the generic framework allows valuable insights to be gained.

    Integrated sensor and controller framework : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Information and Telecommunications Engineering at Massey University, Palmerston North, New Zealand

    This thesis presents a software platform to integrate sensors, controllers, actuators and instrumentation within a common framework. This provides a flexible, reusable, reconfigurable and scalable system for designers to use as a base for any sensing and control platform. The purpose of the framework is to decrease system development time, and allow more time to be spent on designing the control algorithms rather than implementing the system. The architecture is generic, and finds application in many areas such as home, office and factory automation, process and environmental monitoring, surveillance and robotics. The framework uses a data-driven design, which separates the data storage areas (dataslots) from the components of the framework that process the data (processors). Separating all the components of the framework in this way allows a flexible configuration. When a processor places data into a dataslot, the dataslot queues all the processors that use that data to run. A system based on this framework is configured by a text file. All the components are defined in the file, along with the interactions between them. The system can be thought of as multiple boxes, with the text file defining how these boxes are connected together. This allows rapid configuration of the system, as separate text files can be maintained for different configurations. A text file is used for the configuration instead of a graphical environment to simplify the development process and reduce development time. One potential limitation of separating the computational components is increased overhead or latency. It is acknowledged that this is an important consideration in many control applications, so the framework is designed to minimise latency through prioritised queues and multitasking. This prevents one slow component from degrading the performance of the rest of the system.
The operation of the framework is demonstrated through a range of different applications. These show some of the key features, including: acquiring data, handling multiple dataslots that a processor reads from or writes to, controlling actuators, how the virtual instrumentation works, network communications, where controllers fit into the framework, data logging, image and video dataslots, timers, and dynamically linked libraries. A number of experiments show the framework under real conditions. The framework's data passing mechanisms are demonstrated, a simple control and data logging application is shown, and an image processing application demonstrates the system under load. The latency of the framework is also determined. These illustrate how the framework would operate with different hardware and software applications. Further work remains on the framework, as extra features can be added to improve its usability. Overall, this thesis presents a flexible system to integrate sensors, actuators, instrumentation and controllers that can be utilised in a wide range of applications.
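    The dataslot/processor pattern described in the abstract can be sketched in a few lines. All class and method names below are assumptions for illustration, not taken from the thesis: the essential idea shown is that writing to a dataslot queues every processor that reads from it, decoupling data storage from computation.

```python
# Minimal sketch of a data-driven dataslot/processor framework
# (names assumed, not from the thesis).
from collections import deque

class Dataslot:
    """Holds one piece of data; notifies the framework when written."""
    def __init__(self, framework, name):
        self.framework, self.name, self.value = framework, name, None
        self.readers = []  # processors triggered when this slot is written

    def write(self, value):
        self.value = value
        for proc in self.readers:          # queue every dependent processor
            self.framework.queue.append(proc)

class Processor:
    """Reads from input dataslots, computes, writes to an output dataslot."""
    def __init__(self, reads, writes, func):
        self.reads, self.writes, self.func = reads, writes, func
        for slot in reads:
            slot.readers.append(self)

    def run(self):
        self.writes.write(self.func(*(s.value for s in self.reads)))

class Framework:
    """Runs queued processors until the system is quiescent."""
    def __init__(self):
        self.queue = deque()

    def run(self):
        while self.queue:
            self.queue.popleft().run()

fw = Framework()
raw = Dataslot(fw, "raw")
scaled = Dataslot(fw, "scaled")
Processor([raw], scaled, lambda x: x * 10)  # a simple processing component
raw.write(4)   # writing queues the dependent processor...
fw.run()       # ...which runs and writes into 'scaled'
print(scaled.value)  # 40
```

    A prioritised queue, as the thesis describes, would replace the plain FIFO `deque` so latency-critical processors run before slower components.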