    Advances in Engineering Software for Multicore Systems

    The vast amounts of data to be processed by today’s applications demand higher computational power. To meet application requirements and achieve reasonable application performance, it becomes increasingly profitable, or even necessary, to exploit any available hardware parallelism. For both new and legacy applications, successful parallelization often comes at a high cost. This chapter proposes a set of methods that employ an optimistic semi-automatic approach, enabling programmers to exploit parallelism on modern hardware architectures. It provides a set of methods, including an LLVM-based tool, to help programmers identify the most promising parallelization targets and understand the key types of parallelism. The approach reduces the manual effort needed for parallelization. One contribution of this work is an efficient profiling method that determines the control and data dependences needed for parallelism discovery and other types of code analysis. Another contribution is a method for detecting code sections where parallel design patterns might be applicable and for suggesting relevant code transformations. Our approach efficiently reports detailed runtime data dependences. It accurately identifies opportunities for parallelism and the appropriate type of parallelism to use, such as task-based or loop-based parallelism.
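
    As a rough illustration of the kind of runtime information such a profiler reports (a hypothetical Python sketch, not the chapter’s LLVM-based tool), the code below records which iteration last wrote each array element and flags the loop-carried read-after-write dependences that rule out straightforward loop parallelism:

        # Hypothetical sketch: record which iteration last wrote each array
        # element and flag loop-carried read-after-write dependences.
        def find_loop_carried_raw(n):
            a = list(range(n))
            last_write = {}      # element index -> iteration that last wrote it
            dependences = set()  # (source iteration, sink iteration)
            for i in range(1, n):
                src = i - 1      # the loop body reads a[i - 1]
                if src in last_write and last_write[src] != i:
                    dependences.add((last_write[src], i))
                a[i] = a[i - 1] + 1   # write of a[i] depends on a[i - 1]
                last_write[i] = i
            return dependences

        print(sorted(find_loop_carried_raw(5)))  # [(1, 2), (2, 3), (3, 4)]

    An empty dependence set would indicate that the loop iterations are independent and therefore candidates for loop-based parallelism; the dependences found here would instead suggest task-based restructuring.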

    A Language and Preprocessor for User-Controlled Generation of Synthetic Programs

    We describe Genesis, a language for the generation of synthetic programs. The language allows users to annotate a template program to customize its code using statistical distributions and to generate program instances based on those distributions. This effectively allows users to generate programs whose characteristics vary in a statistically controlled fashion, thus improving upon existing program generators and alleviating the difficulties associated with ad hoc methods of program generation. We describe the language constructs, a prototype preprocessor for the language, and five case studies that show the ability of Genesis to express a range of programs. We evaluate the preprocessor’s performance and the statistical quality of the samples it generates. We thereby show that Genesis is a useful tool that eases the expression and creation of large and diverse program sets.
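
    As a rough sketch of the underlying idea (hypothetical Python, not Genesis syntax, which the paper itself defines), a template program can be instantiated by sampling its parameters from user-chosen statistical distributions:

        # Hypothetical sketch: instantiate a code template with values drawn
        # from statistical distributions, yielding controlled program variants.
        import random

        TEMPLATE = """\
        def kernel(data):
            acc = 0
            for i in range(0, len(data), {stride}):
                acc += data[i] * {weight:.3f}
            return acc
        """

        def generate_instance(rng):
            return TEMPLATE.format(
                stride=rng.randint(1, 8),     # loop stride, uniform on 1..8
                weight=rng.gauss(1.0, 0.25),  # constant, normal(1.0, 0.25)
            )

        rng = random.Random(42)
        programs = [generate_instance(rng) for _ in range(5)]
        print(programs[0])

    Varying the distributions shifts the statistical profile of the generated set, which is the kind of controlled variation the abstract describes.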

    Parallel Testing in Behavior Driven Development

    The testing process consists of activities that demand effort, such as producing, executing, and validating test scenarios. Covering all test scenarios manually is unfeasible, since it is error-prone and labor-expensive. Thus, partial or complete automation reduces costs and increases the effectiveness of tests. The increasing availability of hardware resources provides opportunities to scale testing through parallel execution of test cases or test suites. Some tools perform parallel execution of tests, but their use requires complicated settings, and when combined with methodologies such as Behavior-Driven Development, it may create an overhead for users. This paper presents the Multi-Threaded Testing (MTT) tool for parallel execution of test scenarios in the context of Behavior-Driven Development, which aims to reduce the computational time required to test Java projects. Furthermore, the paper reports an experimental study that evaluates the MTT tool’s performance on two different hardware configurations. Our results show that MTT reached a speedup of 4.59 with an efficiency of 46% using ten threads on an Intel Core i5-9300H CPU, and a speedup of 3.45 with an efficiency of 43% using eight threads on an Intel Core i7-7700HQ CPU.
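
    As a quick check on the reported numbers, parallel efficiency is simply speedup divided by thread count; the sketch below (plain Python, unrelated to the MTT implementation) reproduces the 46% and 43% figures:

        # Parallel efficiency = speedup / number of threads.
        def efficiency(speedup, threads):
            return speedup / threads

        print(f"{efficiency(4.59, 10):.0%}")  # 46% for the i5-9300H run
        print(f"{efficiency(3.45, 8):.0%}")   # 43% for the i7-7700HQ run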

    A Fortran Kernel Generation Framework for Scientific Legacy Code

    Quality assurance is very important in software development. The complexity of modules and structure in software impedes the testing procedure and further development. For complex and poorly designed scientific software, module developers and software testers need to put in a lot of extra effort to monitor the impact of unrelated modules and to test the constraints of the whole system. In addition, widely used benchmarks cannot give programmers an accurate, program-specific evaluation of system performance. In this situation, generated kernels can provide considerable insight for performance tuning. Therefore, in order to greatly improve the productivity of various scientific software engineering tasks such as performance tuning, debugging, and verification of simulation results, we developed an automatic compute-kernel extraction prototype platform for complex legacy scientific code. In addition, considering that scientific research and experiments require long-running simulations and the transfer of huge amounts of data, we apply message-passing-based parallelization and I/O behavior optimization to substantially improve the performance of the kernel extractor framework, and we use profiling tools to guide the parallel distribution. Abnormal event detection is another important aspect of scientific research; when dealing with huge observational datasets combined with simulation results, it becomes not only essential but also extremely difficult. In this dissertation, in order to detect both high-frequency and low-frequency events, we reconfigured this framework with an in-situ data transfer infrastructure. By combining signal-processing preprocessing (decimation) with a machine-learning detection model trained on the streamed data, our framework can significantly decrease the amount of data that must be transferred for concurrent data analysis (between distributed CPU/GPU compute nodes). Finally, the dissertation presents the implementation of the framework and a case study of the ACME Land Model (ALM) for demonstration. The generated compute kernels can be used, at lower cost, in performance tuning experiments and quality assurance, which includes debugging legacy code, verifying simulation results by tracking single or multiple variables, collaborating with compiler vendors, and generating custom benchmark tests.
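
    The decimation step mentioned above can be illustrated with a small hypothetical Python sketch (the dissertation’s actual in-situ pipeline is not shown in the abstract): the stream is downsampled before a detector runs on it, which is what reduces the volume of data to be transferred:

        # Hypothetical sketch: block-average decimation to shrink a data stream
        # before running a placeholder event detector on the reduced signal.
        def decimate(stream, factor):
            """Average every `factor` consecutive samples into one."""
            return [sum(stream[i:i + factor]) / factor
                    for i in range(0, len(stream) - factor + 1, factor)]

        def detect_events(signal, threshold):
            """Placeholder detector: flag indices whose value exceeds a threshold."""
            return [i for i, v in enumerate(signal) if v > threshold]

        raw = [0.1, 0.2, 0.1, 5.0, 5.2, 4.9, 0.2, 0.1, 0.1]  # toy stream
        reduced = decimate(raw, 3)                            # 9 samples -> 3
        print(detect_events(reduced, 1.0))                    # [1]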

    Automatic Generation of Parallel Unit Tests

    Multithreaded software is subject to data races. Currently available data race detectors report such errors to the developer but consume large amounts of time and memory; many approaches are not applicable to large software projects. Unit tests containing fractions of the program lead to better results. We propose AutoRT, an approach that automatically generates parallel unit tests from existing programs as targets for data race detectors. AutoRT uses the Single Static Multiple Dynamic (SSMD) analysis pattern to reduce complexity and can therefore be used efficiently even in large software projects. We evaluate AutoRT using Microsoft CHESS and show that with SSMD all 110 data races contained in our sample programs can be located.
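
    To illustrate the kind of defect such detectors target (a hypothetical example, not AutoRT output or CHESS input), two threads performing an unsynchronized read-modify-write on shared state form a data race:

        # Hypothetical illustration of a data race: two threads perform an
        # unsynchronized read-modify-write on a shared counter.
        import threading

        counter = 0

        def worker(iterations):
            global counter
            for _ in range(iterations):
                counter += 1  # racy: load, add, and store are not atomic

        threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # The final value depends on thread interleaving and is not
        # guaranteed to be 200000.
        print(counter)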