233,630 research outputs found

    High-performance computing and communication models for solving the complex interdisciplinary problems on DPCS

    Get PDF
    The paper presents advanced high-performance computing (HPC) and parallel computing (PC) methodologies for solving large, complex problems that span several research areas. About eight interdisciplinary problems are solved on multiple computers communicating over a local area network. The mathematical modeling and large sparse simulations cover science, engineering, biomedicine, nanotechnology, software engineering, agriculture, image processing and urban planning. The PC software methodologies under consideration include PVM, MPI, LUNA, MDC, OpenMP, CUDA and LINDA, integrated with COMSOL and C++/C. Because parallel programming admits different communication models, definitions of parallel processing, distributed processing and memory types are given to frame the main contribution of this paper. The match between a PC methodology and a large sparse application depends on the solution domain, the dimension of the targeted area, the computational and communication patterns, the architecture of the distributed parallel computing system (DPCS), the structure of the computational complexity and the communication cost. The originality of this paper lies in building complex numerical models of large-scale partial differential equations (PDEs), discretizing them with finite difference (FDM) or finite element (FEM) methods, and carrying out numerical simulation, high-performance simulation and performance measurement. The PDE simulations are performed by sequential and parallel algorithms to visualize the complex model at high resolution. In the mathematical model, various independent and dependent parameters represent the complex, real phenomena of the interdisciplinary application; as a model executes, these parameters can be manipulated and changed, and some chemical or mechanical properties can then be predicted by observing the parameter changes. The parallel programs build on the client-server, master-slave and fragmented models. The HPC communication models for the interdisciplinary problems above are analyzed using algorithm flow, numerical analysis and comparison of parallel performance evaluations. In conclusion, integrating HPC, communication models, PC software, and performance and numerical analysis is an effective approach to meeting the matching requirement and optimizing the solution of complex interdisciplinary problems.
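
    As a concrete illustration of the kind of PDE kernel this abstract describes, the following is a minimal sketch (not the paper's actual model) of an explicit finite-difference update for a 1D heat equation, parallelized with OpenMP, one of the PC methodologies listed. The grid size, step count, and stability constant are assumed values; under MPI the same stencil would instead be split into per-node blocks that exchange halo points each step.

        // Illustrative sketch only: explicit FDM stencil for a 1D heat equation.
        // Compile with: g++ -O2 -fopenmp heat1d.cpp
        #include <cstdio>
        #include <utility>
        #include <vector>

        int main() {
            const int N = 1 << 20;        // grid points (assumed)
            const int steps = 100;        // time steps (assumed)
            const double alpha = 0.25;    // k*dt/dx^2; must be <= 0.5 for stability
            std::vector<double> u(N, 0.0), unew(N, 0.0);
            u[N / 2] = 1.0;               // initial heat spike in the middle

            for (int t = 0; t < steps; ++t) {
                // Each point's update is independent, so the loop parallelizes
                // directly with OpenMP.
                #pragma omp parallel for
                for (int i = 1; i < N - 1; ++i)
                    unew[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
                std::swap(u, unew);
            }
            std::printf("u[N/2] after %d steps: %g\n", steps, u[N / 2]);
        }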

    Image processing methods and architectures in diagnostic pathology.

    Get PDF
    Grid technology enables the clustering of, and efficient, secure access to and interaction among, a wide variety of geographically distributed resources such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications are large-scale computational and data-intensive problems in science and engineering. This paper considers general grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis, focusing on user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports the image analysis operations commonly performed in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and they therefore seem suitable for tissue-based diagnosis as well. The implemented system is a joint application that combines Web and Grid Service architectures around a distributed architecture for image processing. It has proven to be a successful solution for analyzing a large and heterogeneous set of histological images on an architecture of massively parallel processors using message passing and non-shared memory.
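
    The master/worker structure such a system is built around can be reduced to a small sketch. The following is illustrative only: it splits a virtual slide into tiles and processes them from a shared queue using threads on one machine, whereas the real system distributes tiles to grid nodes via message passing; the tile count and the per-tile "analysis" are hypothetical stand-ins.

        // Illustrative sketch only: master/worker tile processing with a shared
        // atomic work queue. Compile with: g++ -O2 -pthread tiles.cpp
        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            const int tiles = 64;               // virtual slide tiles (assumed)
            std::atomic<int> next{0};           // shared queue index
            std::vector<double> result(tiles);

            auto worker = [&] {
                // Each worker pulls the next unprocessed tile until none remain.
                for (int t; (t = next.fetch_add(1)) < tiles; )
                    result[t] = t * 0.5;        // stand-in for tile image analysis
            };

            unsigned n = std::thread::hardware_concurrency();
            if (n == 0) n = 4;                  // fallback if the count is unknown
            std::vector<std::thread> pool;
            for (unsigned i = 0; i < n; ++i)
                pool.emplace_back(worker);
            for (auto& th : pool) th.join();

            std::printf("processed %d tiles with %u workers\n", tiles, n);
        }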

    OLTP on Hardware Islands

    Get PDF
    Modern hardware is abundantly parallel and increasingly heterogeneous. The numerous processing cores have non-uniform access latencies to the main memory and to the processor caches, which causes variability in communication costs. Unfortunately, database systems mostly assume that all processing cores are the same and that microarchitectural differences are not significant enough to appear in critical database execution paths. As we demonstrate in this paper, however, hardware heterogeneity does appear in the critical path, and conventional database architectures achieve suboptimal and, even worse, unpredictable performance. We perform a detailed performance analysis of OLTP deployments in servers with multiple cores per CPU (multicore) and multiple CPUs per server (multisocket). We compare database deployment strategies in which we vary the number and size of independent database instances running on a single server, from a single shared-everything instance to fine-grained shared-nothing configurations. We quantify the impact of non-uniform hardware on various deployments by (a) examining how efficiently each deployment uses the available hardware resources and (b) measuring the impact of distributed transactions and skewed requests on different workloads. Finally, we argue in favor of shared-nothing deployments that are topology- and workload-aware and take advantage of fast on-chip communication between islands of cores on the same socket.
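
    The communication-cost variability the paper measures can be observed directly. Below is a minimal, illustrative Linux sketch (not the paper's methodology) that times a flag round-trip between two pinned threads; core IDs 0/1 and 0/8 are placeholders for a same-socket and a cross-socket pair on a hypothetical machine, and would need adjusting to the real topology.

        // Illustrative sketch only (Linux): same-socket vs cross-socket
        // round-trip latency. Compile with: g++ -O2 -pthread pingpong.cpp
        #include <atomic>
        #include <chrono>
        #include <cstdio>
        #include <pthread.h>
        #include <sched.h>
        #include <thread>

        static void pin_to_core(int core) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }

        static double pingpong(int core_a, int core_b, int iters = 1000000) {
            std::atomic<int> flag{0};
            std::thread peer([&] {
                pin_to_core(core_b);
                for (int i = 0; i < iters; ++i) {
                    while (flag.load(std::memory_order_acquire) != 1) {}
                    flag.store(0, std::memory_order_release);
                }
            });
            pin_to_core(core_a);
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < iters; ++i) {
                flag.store(1, std::memory_order_release);
                while (flag.load(std::memory_order_acquire) != 0) {}
            }
            auto t1 = std::chrono::steady_clock::now();
            peer.join();
            return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
        }

        int main() {
            // Cores 0 and 1 usually share a socket; core 8 is a guess for a
            // second socket -- check /proc/cpuinfo or hwloc on a real machine.
            std::printf("same socket : %.0f ns/round-trip\n", pingpong(0, 1));
            std::printf("cross socket: %.0f ns/round-trip\n", pingpong(0, 8));
        }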

    A cost-effective heuristic to schedule local and remote memory in cluster computers

    Full text link
    Cluster computers are a cost-effective alternative to supercomputers. In these systems, it is common to constrain the memory address space of a given processor to its local motherboard, which is much cheaper than a full-fledged shared-memory implementation spanning motherboards. However, memory usage can then become unfairly balanced across motherboards. On the other hand, remote memory access (RMA) hardware provides fast interconnects among the motherboards of a cluster, and RMA devices can be used to access remote RAM from a local motherboard. This work exploits that capability to achieve better global use of the total RAM in the system: the address space of local applications is extended to remote motherboards and used to access remote RAM. This paper presents an ideal memory scheduling algorithm and proposes a cost-effective heuristic to allocate local and remote memory among local applications. Compared to the ideal algorithm, the heuristic obtains the same or very similar results while greatly reducing the computational cost. In addition, we analyze the impact on the performance of standalone applications of varying the memory distribution among regions (local, local to board, and remote), and then extend this study to any number of concurrent applications. Experimental results show that a QoS parameter is needed to avoid unacceptable performance degradation. © 2011 Springer Science+Business Media, LLC. This work was supported by Spanish CICYT under Grant TIN2009-14475-C04-01 and by Consolider-Ingenio under Grant CSD2006-00046. Serrano Gómez, M.; Sahuquillo Borrás, J.; Petit Martí, S.V.; Hassan Mohamed, H.; Duato Marín, J.F. (2012). A cost-effective heuristic to schedule local and remote memory in cluster computers. Journal of Supercomputing 59(3):1533-1551. https://doi.org/10.1007/s11227-011-0566-8
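
    To make the scheduling problem concrete, here is a small greedy allocator in the spirit of (but not identical to) the heuristic the abstract describes: a QoS floor first reserves a minimum local share per application, then remaining local pages go to the highest-priority applications and the rest spills to remote RAM. All names and numbers are invented.

        // Illustrative sketch only: greedy local/remote page allocation with a
        // QoS floor. Not the authors' algorithm.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct App {
            const char* name;
            int demand;        // pages requested (invented)
            double weight;     // priority: higher = more latency-sensitive
            int local = 0, remote = 0;
        };

        int main() {
            int local_free = 1000;                 // pages on the local board
            const double qos_local_min = 0.25;     // each app gets >= 25% local
            std::vector<App> apps = {
                {"sim", 800, 2.0}, {"db", 600, 1.0}, {"batch", 400, 0.5}};

            // QoS pass: guarantee every application its minimum local share.
            for (auto& a : apps) {
                a.local = std::min(local_free, (int)(a.demand * qos_local_min));
                local_free -= a.local;
            }
            // Greedy pass: remaining local pages go to the heaviest apps first.
            std::sort(apps.begin(), apps.end(),
                      [](const App& x, const App& y) { return x.weight > y.weight; });
            for (auto& a : apps) {
                int extra = std::min(local_free, a.demand - a.local);
                a.local += extra;
                local_free -= extra;
                a.remote = a.demand - a.local;     // the rest spills to remote RAM
            }
            for (auto& a : apps)
                std::printf("%-5s local=%4d remote=%4d\n", a.name, a.local, a.remote);
        }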

    A cluster computer performance predictor for memory scheduling

    Full text link
    Remote memory access (RMA) hardware allows a given motherboard in a cluster to directly access the memory installed in a remote motherboard of the same cluster. In recent work, this capability has been used to extend the addressable memory space of selected motherboards, enabling a better balance of main memory resources among cluster applications at a much lower cost than implementing a full-fledged shared-memory system. In this context, the memory scheduler is in charge of finding a distribution of local and remote memory that maximizes performance and guarantees a minimum QoS among the applications. Since changing the memory distribution is a slow process involving several motherboards, the memory scheduler must make sure that the target distribution provides better performance than the current one. In this paper, a performance predictor is designed to find the best memory distribution for a given set of applications executing on a cluster motherboard. The predictor uses simple hardware counters to estimate the expected performance impact of the different memory distributions; the counters provide the predictor with the time spent in the processor, in memory access, and in the network. The performance model used by the predictor has been validated in a detailed microarchitectural simulator using real benchmarks. Results show that the predictions never deviate by more than 5% from the real results, and by less than 0.5% in most cases. This work was supported by Spanish CICYT under Grant TIN2009-14475-C04-01 and by Consolider-Ingenio under Grant CSD2006-00046. Serrano Gómez, M.; Sahuquillo Borrás, J.; Hassan Mohamed, H.; Petit Martí, S.V.; Duato Marín, J.F. (2011). A cluster computer performance predictor for memory scheduling. In: Algorithms and Architectures for Parallel Processing. Springer, 7017:353-362. doi:10.1007/978-3-642-24669-2_34
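
    The predictor's idea can be sketched as a back-of-the-envelope model; this is not the paper's validated model, only its flavor. Runtime is split into counter-derived compute, memory, and network components, and the memory component is rescaled by the average access latency of each candidate local/remote mix. The latencies and profile values below are assumptions.

        // Illustrative sketch only: counter-based runtime model for candidate
        // local/remote memory distributions.
        #include <cstdio>

        // Seconds attributed to each component under the current distribution,
        // as derived from hardware counters (values invented here).
        struct CounterProfile { double t_cpu, t_mem, t_net; };

        // Hypothetical latencies: remote RAM over the RMA fabric is slower.
        const double LOCAL_NS = 90.0, REMOTE_NS = 260.0;

        // Predicted runtime if `local_frac` of the pages become local, given
        // that `cur_local_frac` of them were local when the profile was taken.
        double predict(const CounterProfile& p, double cur_local_frac,
                       double local_frac) {
            double cur_avg = cur_local_frac * LOCAL_NS + (1 - cur_local_frac) * REMOTE_NS;
            double new_avg = local_frac * LOCAL_NS + (1 - local_frac) * REMOTE_NS;
            return p.t_cpu + p.t_mem * (new_avg / cur_avg) + p.t_net;
        }

        int main() {
            CounterProfile prof{4.0, 3.0, 1.0};   // profiled at 50% local (assumed)
            for (double f : {0.25, 0.50, 0.75, 1.00})
                std::printf("local fraction %.2f -> predicted %.2f s\n",
                            f, predict(prof, 0.50, f));
        }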

    PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications

    Get PDF
    Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, mounting evidence has shown that power usage limits system scale and computing density, and thus, ultimately, system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight into where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack, the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and to correlate these measurements with application functions. In this work, we extend the framework to support systems with multicore, multiprocessor-based nodes, and provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impact of chip multiprocessing on power and energy efficiency and its interaction with application execution. In addition, we use PowerPack to study the power dynamics and energy efficiency of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance.
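
    The accounting behind such profiling is straightforward; the sketch below, with invented power traces, integrates sampled power into energy (E = sum of P_i * dt) and compares two hypothetical DVFS settings by energy and energy-delay product. PowerPack itself meters real devices externally; nothing here reflects its API.

        // Illustrative sketch only: energy from sampled power, and an
        // energy-vs-EDP comparison of two invented DVFS operating points.
        #include <cstdio>
        #include <vector>

        double energy_joules(const std::vector<double>& watts, double dt_s) {
            double e = 0.0;
            for (double p : watts) e += p * dt_s;   // rectangle-rule integration
            return e;
        }

        int main() {
            const double dt = 0.1;                  // 10 Hz sampling (assumed)
            // Power traces of the same job at two CPU frequencies (hypothetical):
            std::vector<double> high_f(100, 95.0);  // 10 s at ~95 W, nominal clock
            std::vector<double> low_f(130, 62.0);   // 13 s at ~62 W after DVFS down

            double e_hi = energy_joules(high_f, dt), t_hi = high_f.size() * dt;
            double e_lo = energy_joules(low_f, dt),  t_lo = low_f.size() * dt;
            // The energy-delay product penalizes slowdown: scaling down saves
            // energy here but worsens EDP, so the "win" is workload-dependent.
            std::printf("high freq: %.0f J, EDP %.0f J*s\n", e_hi, e_hi * t_hi);
            std::printf("low  freq: %.0f J, EDP %.0f J*s\n", e_lo, e_lo * t_lo);
        }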

    Characterization of the Impact of Hardware Islands on OLTP

    Get PDF
    Modern hardware is abundantly parallel and increasingly heterogeneous. The numerous processing cores have non-uniform access latencies to the main memory and processor caches, which causes variability in communication costs. Unfortunately, database systems mostly assume that all processing cores are the same and that microarchitectural differences are not significant enough to appear in critical database execution paths. As we demonstrate in this paper, however, non-uniform core topology does appear in the critical path, and conventional database architectures achieve suboptimal and, even worse, unpredictable performance. We perform a detailed performance analysis of OLTP deployments in servers with multiple cores per CPU (multicore) and multiple CPUs per server (multisocket). We compare database deployment strategies in which we vary the number and size of independent database instances running on a single server, from a single shared-everything instance to fine-grained shared-nothing configurations. We quantify the impact of non-uniform hardware on various deployments by (a) examining how efficiently each deployment uses the available hardware resources and (b) measuring the impact of distributed transactions and skewed requests on different workloads. We show that no strategy is optimal for all cases and that the best choice depends on the combination of hardware topology and workload characteristics. Finally, we argue that transaction processing systems must be aware of the hardware topology in order to achieve predictably high performance.
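
    A topology-aware, island-per-socket deployment of the kind studied here can be sketched as core pinning. This is illustrative only: the paper deploys real DBMS instances, and the 2-socket, 4-cores-per-socket layout below is hypothetical; a real deployment would read the topology from hwloc or sysfs.

        // Illustrative sketch only (Linux): confine each "instance" to one
        // socket so its threads communicate on-chip.
        // Compile with: g++ -O2 -pthread islands.cpp
        #include <cstdio>
        #include <pthread.h>
        #include <sched.h>
        #include <thread>
        #include <vector>

        static void pin_to_cores(const std::vector<int>& cores) {
            cpu_set_t set;
            CPU_ZERO(&set);
            for (int c : cores) CPU_SET(c, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }

        static void run_instance(int id, std::vector<int> cores) {
            std::vector<std::thread> workers;
            for (size_t i = 0; i < cores.size(); ++i)
                workers.emplace_back([&] {
                    pin_to_cores(cores);   // every worker stays on its island
                    // ... transaction processing loop would run here ...
                });
            for (auto& w : workers) w.join();
            std::printf("instance %d ran on %zu cores of one socket\n",
                        id, cores.size());
        }

        int main() {
            // Shared-nothing per socket: instance 0 on socket 0, 1 on socket 1.
            std::thread a(run_instance, 0, std::vector<int>{0, 1, 2, 3});
            std::thread b(run_instance, 1, std::vector<int>{4, 5, 6, 7});
            a.join(); b.join();
        }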