
    Data Partitioning and Load Balancing in Parallel Disk Systems

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
    Keywords: parallel disk systems, performance tuning, file striping, data allocation, load balancing, disk cooling.
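The two mechanisms the abstract names can be illustrated with a toy sketch: round-robin striping of a file's blocks across disks, and greedy "heat"-based allocation that places new files on the coolest disk (the idea behind disk cooling). This is an illustrative stand-in, not the paper's actual file system; all names and numbers are made up.

```python
def stripe(blocks, num_disks):
    """Round-robin assignment of file blocks to disks (striping)."""
    layout = {d: [] for d in range(num_disks)}
    for i, block in enumerate(blocks):
        layout[i % num_disks].append(block)
    return layout

def allocate(file_heat, disk_heat):
    """Greedy allocation: place a new file on the least-loaded
    ("coolest") disk, then account for its added load."""
    target = min(disk_heat, key=disk_heat.get)
    disk_heat[target] += file_heat
    return target

disk_heat = {0: 5.0, 1: 2.0, 2: 3.5}   # hypothetical access rates per disk
print(allocate(1.0, disk_heat))        # -> 1 (the coolest disk)
print(stripe(list(range(6)), 3))       # blocks 0..5 spread over 3 disks
```

Dynamic redistribution would periodically recompute the heat estimates and migrate files from hot disks to cool ones using the same greedy rule.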

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, thus demanding from the STM designer the inclusion of mechanisms properly oriented to performance and other quality indexes. In particular, one core issue to cope with in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency consists in dynamically determining the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. For too low levels of concurrency, parallelism can be hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, an aspect which also has repercussions for energy efficiency. In this chapter we overview a set of recent techniques aimed at building "application-specific" performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although they share some base concepts in modeling system performance versus the degree of concurrency, these techniques rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs in terms of the relation between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
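The core idea of concurrency-level tuning can be sketched as a simple hill climb over the thread count: increase concurrency while measured throughput rises, and stop when it falls, which signals contention-induced thrashing. This is a toy stand-in for the model-based tuners the chapter surveys, not any specific technique from it; `measure_throughput` and the synthetic curve are assumptions.

```python
def tune_concurrency(measure_throughput, min_threads=1, max_threads=16):
    """Hill-climb toward the thread count that maximizes committed
    transactions per second (a proxy for STM throughput)."""
    best_n = min_threads
    best_tp = measure_throughput(min_threads)
    for n in range(min_threads + 1, max_threads + 1):
        tp = measure_throughput(n)
        if tp <= best_tp:        # throughput fell: rollbacks/thrashing
            break
        best_n, best_tp = n, tp
    return best_n

# Synthetic unimodal curve: throughput grows linearly, then a quadratic
# contention penalty kicks in past 6 threads.
curve = lambda n: n * 100 - max(0, n - 6) ** 2 * 120
print(tune_concurrency(curve))   # -> 6
```

Real tuners replace the synthetic curve with an analytic or machine-learned performance model, which trades model-instantiation latency against prediction precision, as the abstract notes.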

    Kinematic calibration of Orthoglide-type mechanisms from observation of parallel leg motions

    The paper proposes a new calibration method for parallel manipulators that allows efficient identification of the joint offsets using observations of the manipulator leg parallelism with respect to the base surface. The method employs a simple and low-cost measuring system, which evaluates deviation of the leg location during motions that are assumed to preserve leg parallelism for the nominal values of the manipulator parameters. Using the measured deviations, the developed algorithm estimates the joint offsets, which are treated as the most essential parameters to be identified. The validity of the proposed calibration method and the efficiency of the developed numerical algorithms are confirmed by experimental results. The sensitivity of the measurement methods and the calibration accuracy are also studied.
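The identification step described here amounts to a least-squares fit: measured parallelism deviations are related to the unknown joint offsets through a sensitivity (Jacobian) matrix, and the offsets are the values that best explain the deviations. The sketch below is purely illustrative, assuming a linearized two-offset model with a made-up sensitivity matrix; it is not the paper's kinematic model.

```python
def identify_offsets(J, d):
    """Least-squares solution of J @ x ~ d for two joint offsets,
    via the normal equations (J^T J) x = J^T d and Cramer's rule."""
    a11 = sum(r[0] * r[0] for r in J)
    a12 = sum(r[0] * r[1] for r in J)
    a22 = sum(r[1] * r[1] for r in J)
    b1 = sum(r[0] * di for r, di in zip(J, d))
    b2 = sum(r[1] * di for r, di in zip(J, d))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

J = [(1.0, 0.2), (0.1, 1.0), (0.5, 0.5)]   # hypothetical sensitivities
true_offsets = (0.01, -0.02)
# Simulated leg-parallelism deviations produced by the true offsets:
d = [r[0] * true_offsets[0] + r[1] * true_offsets[1] for r in J]
print(identify_offsets(J, d))   # recovers approximately (0.01, -0.02)
```

With noisy real measurements the fit is no longer exact, and the sensitivity analysis mentioned in the abstract determines how measurement error propagates into the identified offsets.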