5 research outputs found

    The Optimization of Geotechnical Site Investigations for Pile Design in Multiple Layer Soil Profiles Using a Risk-Based Approach

    The testing of subsurface material properties, i.e. a geotechnical site investigation, is a crucial part of projects located on or within the ground. The process consists of testing samples at a variety of locations in order to model the performance of an engineering system for design purposes. Should these models be inaccurate or unconservative due to an improper investigation, there is considerable risk of consequences such as structural collapse, construction delays, litigation, and over-design. Despite these risks, there are relatively few quantitative guidelines or research studies that inform an explicit, optimal investigation for a given foundation and soil profile. This is detrimental, as testing scope is often minimised in an attempt to reduce expenditure, thereby increasing the aforementioned risks. This research recommends optimal site investigations for multi-storey buildings supported by pile foundations, across a variety of structural configurations and soil profiles. The recommendations cover the optimal test type, number of tests, testing locations, and interpretation of test data. The framework uses a risk-based approach, where an investigation is considered optimal if it results in the lowest total project cost, incorporating both the cost of testing and the cost associated with any expected negative consequences. The analysis is statistical in nature, employing Monte Carlo simulation, randomly generated virtual soils produced through random field theory, and finite element analysis for pile assessment. A number of innovations have been developed to support the novel nature of the work. For example, a new method of producing randomly generated multiple-layer soils has been devised. This work is the first instance of site investigations being optimised in multiple-layer soils, which are considerably more complex than the single-layer soils examined previously.
Furthermore, both the framework and the numerical tools have themselves been extensively optimised for speed. Efficiency innovations include modifying the analysis to produce re-usable pile settlement curves, as opposed to designing and assessing the piles directly. This both reduces the amount of analysis required and allows flexible post-processing for different conditions. Other optimisations include the elimination of computationally expensive finite element analysis from within the Monte Carlo simulations, along with additional minor improvements. Practising engineers can optimise their site investigations through three outcomes of this research. Firstly, optimal site investigation scopes are known for the numerous specific cases examined throughout this document, together with the resulting inferred recommendations. Secondly, a rule-of-thumb guideline has been produced, suggesting the optimal number of tests for buildings of all sizes in a single soil case of intermediate variability. Thirdly, a highly efficient and versatile software tool, SIOPS, has been produced, allowing engineers to run a simplified version of the analysis for custom soils and buildings. The tool can perform almost all of the analysis shown throughout the thesis, including the use of a genetic algorithm to optimise testing locations, yet it is approximately 10 million times faster than the original framework, running on a single-core computer within minutes.
Thesis (Ph.D.) -- University of Adelaide, School of Civil, Environmental and Mining Engineering, 202
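    The risk-based idea above can be illustrated with a toy sketch, not taken from the thesis: pick the number of tests that minimises expected total cost (testing cost plus consequence cost), estimated by Monte Carlo over randomly generated virtual soils. All numbers, the moving-average "random field", and the failure criterion here are invented placeholders for illustration only.

    ```python
    import random
    import statistics

    random.seed(0)

    TEST_COST = 1.0       # hypothetical cost per test
    FAILURE_COST = 200.0  # hypothetical consequence cost of an unconservative design

    def virtual_soil(n_cells=50):
        # crude spatially correlated stiffness field: moving average of noise
        noise = [random.gauss(10.0, 3.0) for _ in range(n_cells + 4)]
        return [statistics.mean(noise[i:i + 5]) for i in range(n_cells)]

    def expected_total_cost(n_tests, n_mc=2000):
        total = 0.0
        for _ in range(n_mc):
            soil = virtual_soil()
            true_mean = statistics.mean(soil)
            # sample n_tests evenly spaced locations along the profile
            step = len(soil) // n_tests
            samples = [soil[i * step] for i in range(n_tests)]
            design = statistics.mean(samples)
            # call the design unconservative if stiffness is overestimated by >10%
            failed = design > 1.1 * true_mean
            total += n_tests * TEST_COST + (FAILURE_COST if failed else 0.0)
        return total / n_mc

    costs = {k: expected_total_cost(k) for k in (1, 2, 4, 8, 16)}
    best = min(costs, key=costs.get)  # the "optimal" number of tests in this toy model
    ```

    More tests lower the chance of an unconservative design but raise testing cost, so the expected total cost has an interior minimum; this is the trade-off the thesis's full framework resolves with random fields and finite element analysis rather than this caricature.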

    On the vectorization of FIR filterbanks

    This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of recent computer processors and systems to be properly exploited. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not exploit this parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.
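    The core idea, replacing the iterative FIR loop with a single matrix-vector product, can be sketched as follows. This is a generic NumPy illustration of FIR vectorization, not the paper's Matlab/C implementation, and the function names are invented:

    ```python
    import numpy as np

    def fir_loop(x, h):
        # iterative direct-form FIR: y[n] = sum_k h[k] * x[n-k]
        N, M = len(x), len(h)
        y = np.zeros(N)
        for n in range(N):
            for k in range(M):
                if n - k >= 0:
                    y[n] += h[k] * x[n - k]
        return y

    def fir_vectorized(x, h):
        # replace the iteration by one matrix-vector product y = H @ x,
        # where H is the (Toeplitz) convolution matrix of the filter
        N, M = len(x), len(h)
        H = np.zeros((N, N))
        for k in range(M):
            H += np.diag(np.full(N - k, h[k]), -k)  # h[k] on the k-th subdiagonal
        return H @ x

    x = np.random.default_rng(0).standard_normal(64)
    h = np.array([0.25, 0.5, 0.25])
    y_loop = fir_loop(x, h)
    y_vec = fir_vectorized(x, h)  # identical output, but expressed as matrix algebra
    ```

    For a filterbank, the same trick extends naturally: stacking one convolution matrix per channel (or one row block per filter) turns the whole bank into a single matrix product, which is the form that parallel hardware and optimized BLAS libraries execute efficiently.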

    Optimization techniques for fine-grained communication in PGAS environments

    Partitioned Global Address Space (PGAS) languages promise to deliver improved programmer productivity and good performance in large-scale parallel machines. However, adequate performance for applications that rely on fine-grained communication, without compromising their programmability, is difficult to achieve. Manual or compiler-assisted code optimization is required to avoid fine-grained accesses. The downside of manually applying code transformations is increased program complexity and reduced programmer productivity. On the other hand, compiler optimization of fine-grained accesses requires knowledge of the physical data mapping and the use of parallel loop constructs. This thesis presents optimizations that address the three main challenges of fine-grained communication: (i) low network communication efficiency; (ii) a large number of runtime calls; and (iii) network hotspot creation due to the non-uniform distribution of network communication. To solve these problems, the dissertation presents three approaches. First, it presents an improved inspector-executor transformation that improves network efficiency through runtime aggregation. Second, it presents incremental optimizations to the inspector-executor loop transformation that automatically remove the runtime calls. Finally, the thesis presents a loop scheduling transformation for avoiding network hotspots and the oversubscription of nodes. In contrast to previous work that uses static coalescing, prefetching, limited privatization, and caching, the solutions presented in this thesis cover all aspects of fine-grained communication, including reducing the number of calls generated by the compiler and minimizing the overhead of the inspector-executor optimization.
A performance evaluation with various microbenchmarks and benchmarks, aimed at predicting scaling and absolute performance numbers on a Power 775 machine, indicates that applications with regular accesses can achieve up to 180% of the performance of hand-optimized versions, while in applications with irregular accesses the transformations are expected to yield from 1.12X up to 6.3X speedup. The loop scheduling shows performance gains of 3-25% for the NAS FT and bucket-sort benchmarks, and up to 3.4X speedup for the microbenchmarks.
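    The inspector-executor aggregation described above can be sketched in miniature. This is a generic illustration of the transformation, not the thesis's compiler output; the helper names and the block data distribution are invented, and plain function calls stand in for actual network transfers:

    ```python
    # Fine-grained version: one remote access per element (one message each).
    # The inspector-executor transformation replaces it with the two phases below.

    def inspector_executor_gather(owner_of, remote_bulk_get, indices):
        # inspector phase: scan the access pattern and group indices by owning node
        by_owner = {}
        for pos, i in enumerate(indices):
            by_owner.setdefault(owner_of(i), []).append((pos, i))
        # executor phase: one aggregated message per owner, then local computation
        out = [None] * len(indices)
        for owner, pairs in by_owner.items():
            vals = remote_bulk_get(owner, [i for _, i in pairs])
            for (pos, _), v in zip(pairs, vals):
                out[pos] = v
        return out

    # demo: a 100-element "distributed" array, block-distributed over 4 nodes
    data = list(range(100))
    messages = []

    def owner_of(i):
        return i // 25  # node p owns indices [25p, 25p+25)

    def remote_bulk_get(owner, idxs):
        messages.append(owner)  # count one network message per call
        return [data[i] for i in idxs]

    indices = [3, 77, 5, 30, 99, 4]
    result = inspector_executor_gather(owner_of, remote_bulk_get, indices)
    # 6 fine-grained accesses collapse into 3 messages (owners 0, 3, 1)
    ```

    The fine-grained version would issue six messages here; aggregation issues one per distinct owner, which is the network-efficiency gain the first contribution targets.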

    Compilation techniques for irregular problems on parallel machines

    Massively parallel computers have ushered in the era of teraflop computing. Even though large and powerful machines are being built, they are used by only a fraction of the computing community. The fundamental reason for this situation is that parallel machines are difficult to program. Development of compilers that automatically parallelize programs will greatly increase the use of these machines. A large class of scientific problems can be categorized as irregular computations. In this class of computation, the data access patterns are known only at runtime, creating significant difficulties for a parallelizing compiler to generate efficient parallel code. Some compilers with very limited abilities to parallelize simple irregular computations exist, but the methods used by these compilers fail for any non-trivial application code. This research presents the development of compiler transformation techniques that can be used to effectively parallelize an important class of irregular programs. A central aim of these transformation techniques is to generate code that aggressively prefetches data. Program slicing methods are used as part of the code generation process. In this approach, a program written in a data-parallel language, such as HPF, is transformed so that it can be executed on a distributed memory machine. An efficient compiler runtime support system has been developed that performs data movement and software caching.
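    The runtime-support pattern described above, inspect the data-dependent access pattern, prefetch the off-processor data once, then execute the loop against a software cache, can be sketched as follows. This is a generic illustration of the technique, not the dissertation's runtime system; all names and the ownership rule are invented:

    ```python
    # canonical irregular loop: the indirection array ia[] is known only at
    # runtime, so a compiler cannot statically tell which x[] each iteration reads
    def build_schedule(ia, is_local):
        # runtime preprocessing: collect the off-processor elements the loop needs
        return sorted({j for j in ia if not is_local(j)})

    def execute_loop(y, x_local, ia, schedule, fetch):
        # prefetch every remote element once into a software cache, then run
        cache = dict(zip(schedule, fetch(schedule)))
        for i, j in enumerate(ia):
            y[i] += cache[j] if j in cache else x_local[j]
        return y

    # demo: global array of 20 elements; this processor owns indices 0..9
    x_global = list(range(20))
    x_local = x_global[:10]
    is_local = lambda j: j < 10
    fetch = lambda idxs: [x_global[j] for j in idxs]  # stands in for remote reads

    ia = [2, 15, 15, 7, 12]
    schedule = build_schedule(ia, is_local)  # remote elements, each fetched once
    y = execute_loop([0] * 5, x_local, ia, schedule, fetch)
    ```

    Note that index 15 is needed twice but appears in the schedule once; deduplicating and batching remote reads is exactly what the software caching layer buys over element-at-a-time communication.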

    Locality characteristics of parallel implementations of multidimensional loops

    Quantities characterizing the locality (and hence the communication costs) of parallel implementations of multidimensional loops with an arbitrary nesting structure are introduced and studied. Conditions are obtained for the localization, in virtual processors, of input and intermediate data for repeated use.
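    One simple locality quantity of this kind can be illustrated by counting, for each virtual processor, the distinct array elements its iterations touch; smaller totals mean more data reuse and lower communication cost. This toy metric and the two mappings are invented for illustration and are not the measures defined in the paper:

    ```python
    def total_working_set(n, m, proc_of):
        # for the loop nest "for i in range(n): for j in range(m): use a[i], b[j]",
        # record the distinct a[i] and b[j] elements each virtual processor reads
        touched = {}
        for i in range(n):
            for j in range(m):
                touched.setdefault(proc_of(i, j), set()).update({("a", i), ("b", j)})
        # total elements that must be held in (or moved to) the processors
        return sum(len(s) for s in touched.values())

    n = m = 8  # 8x8 iteration space mapped onto 4 virtual processors
    cyclic_rows = lambda i, j: i % 4                # rows cycled over processors
    blocks_2d = lambda i, j: (i // 4) * 2 + j // 4  # 2x2 grid of 4x4 blocks
    ```

    Under the cyclic row mapping every processor reads all of b, giving a total working set of 40 elements, while the 2D block mapping localizes both a and b and needs only 32; quantities like this make the communication cost of a mapping comparable before any code is run.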