
    An RMS for Non-predictably Evolving Applications

    Non-predictably evolving applications are applications that change their resource requirements during execution. Such applications arise, for example, from adaptive numeric methods such as adaptive mesh refinement and adaptive particle methods. There is increasing interest in letting these applications acquire resources on the fly. However, current HPC Resource Management Systems (RMSs) only allow a static allocation of resources, which cannot be changed after the allocation starts. Non-predictably evolving applications therefore cannot make efficient use of HPC resources: they are forced to request an allocation based on their maximum expected requirements. This paper presents CooRMv2, an RMS that supports efficient scheduling of non-predictably evolving applications. An application can make "pre-allocations" to specify its peak resource usage, then dynamically allocate resources as long as the pre-allocation is not outgrown. Resources that are pre-allocated but not used can be filled by other applications. Results show that the approach is feasible and leads to more efficient resource usage.
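    The pre-allocation idea described above can be sketched in a few lines: an application reserves its peak usage up front, then grows its actual allocation within that envelope, while unused reserved capacity remains fillable by others. This is a hypothetical toy model, not the CooRMv2 interface; all names (`MiniRMS`, `pre_allocate`, `grow`) are illustrative.

```python
class MiniRMS:
    """Toy resource manager sketching the pre-allocation model.

    Hypothetical illustration only, not the CooRMv2 API.
    """

    def __init__(self, total_nodes):
        self.total = total_nodes
        self.prealloc = {}   # app -> peak nodes reserved
        self.alloc = {}      # app -> nodes currently in use

    def pre_allocate(self, app, peak):
        # Admit the application only if its peak can be guaranteed.
        if peak > self.total - sum(self.prealloc.values()):
            return False
        self.prealloc[app] = peak
        self.alloc[app] = 0
        return True

    def grow(self, app, nodes):
        # Dynamic allocation succeeds as long as the pre-allocation
        # envelope is not outgrown.
        if self.alloc[app] + nodes > self.prealloc[app]:
            return False
        self.alloc[app] += nodes
        return True

    def idle_capacity(self):
        # Nodes not actually in use (including pre-allocated but idle
        # ones): these could be filled by other applications.
        return self.total - sum(self.alloc.values())
```

    For example, an adaptive-mesh job on a 16-node system might pre-allocate 12 nodes but start on 4, leaving 12 nodes usable by best-effort work until it grows.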

    Towards Scheduling Evolving Applications

    Most high-performance computing resource managers only allow applications to request a static allocation of resources. However, evolving applications have resource requirements that change (evolve) during their execution. Currently, such applications are forced to request an allocation based on their peak resource requirements, which leads to inefficient resource usage. This paper studies whether it makes sense for resource managers to support evolving applications. It focuses on scheduling fully-predictably evolving applications on homogeneous resources, proposes several scheduling algorithms for them, and evaluates the algorithms in simulation. Results show that resource usage and application response time can be significantly improved with short scheduling times.
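    The cost of peak-based allocation for a fully-predictably evolving application can be made concrete by comparing node-hours under the two models. The stage profile below is a hypothetical example, not data from the paper.

```python
def peak_node_hours(stages):
    """Node-hours charged when an application must hold its peak
    requirement for its whole runtime (static-allocation model)."""
    peak = max(nodes for _, nodes in stages)
    total_time = sum(duration for duration, _ in stages)
    return peak * total_time

def evolving_node_hours(stages):
    """Node-hours actually needed when the allocation can follow
    each stage of a fully-predictably evolving application."""
    return sum(duration * nodes for duration, nodes in stages)

# Hypothetical evolving profile: (duration_hours, nodes) per stage.
stages = [(2, 8), (3, 32), (1, 16)]
```

    For this profile the static model charges 32 nodes for 6 hours (192 node-hours), while the evolving model needs only 128 node-hours; the difference is capacity a stage-aware scheduler could give to other jobs.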

    Enhancing the performance of malleable MPI applications by using performance-aware dynamic reconfiguration

    The work in this paper focuses on providing malleability to MPI applications through a novel performance-aware dynamic reconfiguration technique. The paper describes the design and implementation of Flex-MPI, an MPI library extension that can automatically monitor and predict the performance of applications, balance and redistribute the workload, and reconfigure the application at runtime by changing the number of processes. Unlike existing approaches, the reconfiguration policy is guided by user-defined performance criteria. The focus is on iterative SPMD programs, a class of applications widely used within the scientific community. Extensive experiments show that Flex-MPI can improve the performance, parallel efficiency, and cost-efficiency of MPI programs with minimal effort from the programmer. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under project TIN2013-41350-P, Scalable Data Management Techniques for High-End Computing Systems, and by the EU under COST Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
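    A performance-aware reconfiguration policy of the kind described above can be sketched as a simple decision rule: measure the iteration time, compare it against a user-defined target, and size the process count accordingly. This is a hypothetical model of the idea, not the actual Flex-MPI implementation; the scaling-efficiency parameter is an assumption.

```python
import math

def reconfigure(n_procs, iter_time, target_time,
                efficiency=0.8, max_procs=64):
    """Pick a new process count from a measured iteration time.

    Hypothetical sketch of a performance-aware reconfiguration
    policy. Assumes added processes speed up each iteration with
    the given parallel `efficiency` (an assumption, not Flex-MPI's
    actual performance model).
    """
    if iter_time <= target_time:
        return n_procs  # user-defined criterion met: keep size
    # Smallest count n' with iter_time * n / (n' * efficiency)
    # <= target_time, capped by the available processes.
    needed = iter_time * n_procs / (target_time * efficiency)
    return min(max_procs, math.ceil(needed))
```

    A runtime would call such a rule between iterations and then spawn or retire processes and redistribute the workload to match the returned count.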

    A Model Of Visual Recognition Implemented Using Neural Networks

    The ability to recognise and classify objects in the environment is an important property of biological vision, and it is highly desirable that artificial vision systems have this ability as well. This thesis documents research into the use of artificial neural networks to implement a prototype model of visual object recognition. The prototype model, describing a computational architecture, is derived from relevant physiological and psychological data, and attempts to reconcile the use of structural decomposition and invariant feature detection. To validate the research, a partial implementation of the model has been constructed using multiple neural networks. A linear feed-forward network performs pre-processing after being trained to approximate a conventional statistical data compression algorithm. The output of this pre-processing forms a feature vector that is categorised using an Adaptive Resonance Theory network capable of recognising arbitrary analog patterns. The implementation has been applied to the task of recognising static images of human faces. Experimental results show that the implementation achieves a 100% successful recognition rate with performance that degrades gracefully. The implementation is robust against facial changes and minor occlusions, and it is flexible enough to categorise data from any domain.
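    The categorisation step above can be illustrated with a minimal vigilance-based prototype matcher: a feature vector is compared against stored category prototypes, and if no prototype is similar enough, a new category is created. This is a deliberately simplified stand-in for the idea; a real Adaptive Resonance Theory network uses resonance dynamics and a different match rule, and all names here are illustrative.

```python
import numpy as np

class VigilanceCategorizer:
    """Minimal sketch of ART-style categorisation of analog feature
    vectors: match the closest prototype by cosine similarity, and
    create a new category when similarity falls below a vigilance
    threshold. Illustrative simplification, not a true ART network.
    """

    def __init__(self, vigilance=0.9, lr=0.5):
        self.vigilance = vigilance   # minimum acceptable similarity
        self.lr = lr                 # prototype learning rate
        self.prototypes = []         # one unit vector per category

    def categorize(self, x):
        x = np.asarray(x, dtype=float)
        x = x / np.linalg.norm(x)            # work with unit vectors
        for i, p in enumerate(self.prototypes):
            if float(x @ p) >= self.vigilance:
                p += self.lr * (x - p)       # nudge prototype toward x
                p /= np.linalg.norm(p)
                return i
        self.prototypes.append(x.copy())     # novel input: new category
        return len(self.prototypes) - 1
```

    The vigilance parameter controls category granularity: a high value yields many narrow categories, a low value few broad ones, which mirrors the role vigilance plays in ART networks.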

    Benchmarking real-time distributed object management systems for evolvable and adaptable command and control applications

    Abstract: This paper describes benchmarking for evolvable and adaptable real-time command and control systems. Introduction: MITRE's Evolvable Real-Time C3 initiative developed an approach that would enable current real-time systems to evolve into the systems of the future. We designed and implemented an infrastructure and data manager so that various applications could be hosted on the infrastructure. We then completed a follow-on effort to design flexible, adaptable distributed object management systems for command and control (C2) systems. Such an adaptable system would switch scheduling algorithms, policies, and protocols depending on the need and the environment. Both initiatives were carried out for the United States Air Force. One of the key contributions of our work is the investigation of real-time features for distributed object management systems. Partly as a result of our work, we are now seeing various real-time distributed object management products being developed. Selecting a real-time distributed object management system requires analysing various criteria, so benchmarking studies for real-time distributed object management systems are needed. Although benchmarking systems such as Hartstone and Distributed Hartstone have been developed for middleware systems, they were not developed specifically for distributed object-based middleware. Since much of our work is heavily based on distributed objects, we developed benchmarking systems by adapting the Hartstone system. This paper describes our effort on developing these benchmarks. Section 2 discusses Distributed Hartstone. Section 3 provides background on the original Hartstone and DHartstone designs from SEI (Software Engineering Institute) and CMU (Carnegie Mellon University). Section 4 describes our design and modification of DHartstone to incorporate the capability to benchmark real-time middleware.
Sections 5 and 6 describe the design of the benchmarking systems. For more details of our work on benchmarking and experimental results, we refer to [MAUR98] and [MAUR99]. For background information on our work, we refer to …

    High accuracy ultrasonic degradation monitoring

    This thesis is concerned with maximising the precision of permanently installed ultrasonic time-of-flight sensors. Numerous sources of uncertainty affecting the measurement precision were considered, and a measurement protocol was proposed to minimise variability. The repeatability achievable with this protocol was verified in simulations, in laboratory corrosion experiments, and in various other experiments. One of the most significant and complex problems affecting precision, inner-wall surface roughness, was also investigated, and a signal processing method was proposed that improves the accuracy of estimated wall thickness loss rates by an order of magnitude compared to standard methods. The error associated with temperature effects was found to be the most significant among typical experimental sources of uncertainty (e.g. coherent noise and coupling stability). By implementing temperature compensation, it was shown in laboratory experiments that wall thickness can be estimated with a standard deviation of less than 20 nm when temperature is stable (within 0.1 °C) using the signal processing protocol described in this thesis. In more realistic corrosion experiments, where temperature changes were of the order of 4 °C, it was shown that a wall thickness loss of 1 micron can be detected reliably by applying the same measurement protocol. Another major issue affecting both accuracy and precision is changing inner-wall surface morphology. Ultrasonic wave reflections from rough inner surfaces result in distorted signals, and these distortions significantly affect the accuracy of wall thickness estimates. A new signal processing method, Adaptive Cross-Correlation (AXC), was described to mitigate the effects of such distortions.
It was shown that AXC reduces measurement errors of wall thickness loss rates by an order of magnitude compared to standard signal processing methods, so that mean wall loss can be accurately determined. When wall thickness loss is random and spatially uniform, 90% of wall thickness loss rates measured using AXC lie within 7.5 ± 18% of the actual slope; with a mean corrosion rate of 1 mm/year, the AXC estimate would thus be of the order of 0.75-1.1 mm/year. In addition, the feasibility of increasing the accuracy of wall thickness loss rate measurements even further was demonstrated by using multiple sensors to measure a single wall thickness loss rate; measurement errors can be decreased to 30% of the variability of a single sensor. The main findings of this thesis have led to 1) a solid understanding of the numerous factors that affect the accuracy and precision of wall thickness loss monitoring, 2) a robust signal acquisition protocol, and 3) AXC, a post-processing technique that improves the monitoring accuracy by an order of magnitude. This will benefit corrosion mitigation around the world: corrosion is estimated to cost a developed nation in excess of 2-5% of its GDP. The presented techniques help reduce the response time for detecting industrially actionable corrosion rates of 0.1 mm/year to a few days, minimising the risk of process fluid leakage and increasing overall confidence in asset management.
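    The baseline that AXC improves on, estimating a time-of-flight change by cross-correlating two echo signals, can be sketched briefly. This is the standard cross-correlation delay estimator, not the AXC algorithm itself; the sampling rate and synthetic echo shape below are assumptions for illustration.

```python
import numpy as np

def delay_by_xcorr(ref, sig, fs):
    """Estimate the time shift of `sig` relative to `ref` by locating
    the peak of their cross-correlation. Standard baseline method
    (plain cross-correlation, not AXC); returns the delay in seconds
    with whole-sample resolution."""
    corr = np.correlate(sig, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)  # delay in samples
    return lag / fs

# Synthetic example: a Gaussian echo delayed by 25 samples.
fs = 100e6                                   # assumed 100 MHz sampling
t = np.arange(1024)
ref = np.exp(-((t - 300) ** 2) / 200.0)      # reference echo
sig = np.exp(-((t - 325) ** 2) / 200.0)      # same echo, 25 samples later
```

    In thickness monitoring, such a delay estimate is converted to wall loss via the sound velocity in the material; distortion of the echoes by a rough back wall is what degrades this baseline and motivates an adaptive method such as AXC.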