
    Towards a fully automated computation of RG-functions for the 3-d O(N) vector model: Parametrizing amplitudes

    Within the framework of the field-theoretical description of second-order phase transitions via the 3-dimensional O(N) vector model, accurate predictions for critical exponents can be obtained from (resummation of) the perturbative series of Renormalization-Group functions, which are in turn derived, following Parisi's approach, from the expansions of appropriate field correlators evaluated at zero external momenta. This technique was fully exploited 30 years ago in two seminal works of Baker, Nickel, Green and Meiron, which led to the knowledge of the β-function up to the 6-loop level; they succeeded in obtaining a precise numerical evaluation of all needed Feynman amplitudes in momentum space by lowering the dimensionality of each integration with a cleverly arranged set of computational simplifications. Extending this computation is not straightforward, owing both to the factorial proliferation of relevant diagrams and to the increasing dimensionality of their associated integrals; in any case, this task can reasonably be carried out only within an automated environment. On the road towards the creation of such an environment, we show here how a strategy closely inspired by that of Nickel and coworkers can be stated in algorithmic form and successfully implemented on the computer. As an application, we plot the minimized distributions of residual integrations for the sets of diagrams needed to obtain RG-functions to the full 7-loop level; these give a good estimate of the computational effort that will be required to improve the currently available estimates of critical exponents.
    Comment: 54 pages, 17 figures and 4 tables
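    The resummation step mentioned in the abstract can be illustrated with a small, self-contained sketch. The Python snippet below applies a Pade-Borel resummation, one standard choice for such series, to an invented set of factorially growing coefficients; the coefficients and the function pade_borel_sum are purely illustrative assumptions and do not reproduce anything from the paper.

```python
# Hypothetical Pade-Borel resummation of a divergent series sum_k a_k g^k.
# The coefficients below are invented; they are NOT the beta-function series.
from math import factorial

import numpy as np
from scipy.integrate import quad
from scipy.interpolate import pade


def pade_borel_sum(coeffs, g, denom_order):
    """Resum sum_k coeffs[k] * g**k at coupling g via a Pade approximant
    (with denominator of the given order) of its Borel transform."""
    # Borel transform: divide out the factorial growth, b_k = a_k / k!
    borel = [a / factorial(k) for k, a in enumerate(coeffs)]
    p, q = pade(borel, denom_order)      # numpy.poly1d numerator / denominator
    # Laplace transform back: f(g) ~ int_0^inf exp(-t) * B(g t) dt
    integrand = lambda t: np.exp(-t) * p(g * t) / q(g * t)
    value, _error = quad(integrand, 0.0, np.inf)
    return value


# Toy alternating, factorially growing coefficients (illustrative only):
# a_k = (-1)^k k! / (k + 1), whose Borel transform is log(1 + t) / t.
coeffs = [(-1) ** k * factorial(k) / (k + 1) for k in range(8)]
print(pade_borel_sum(coeffs, g=0.3, denom_order=3))
```

    The Pade approximant is built on the Borel transform rather than on the series itself, so that the factorial growth of the coefficients is divided out before the rational approximation and the Laplace integral are taken.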

    Insight into High-quality Aerodynamic Design Spaces through Multi-objective Optimization

    An approach to support the computational aerodynamic design process is presented and demonstrated through the application of a novel multi-objective variant of the Tabu Search optimization algorithm for continuous problems to the aerodynamic design optimization of turbomachinery blades. The aim is to improve the performance of a specific stage and ultimately of the whole engine. The integrated system developed for this purpose is described. This combines the optimizer with an existing geometry parameterization scheme and a well-established CFD package. The system’s performance is illustrated through case studies – one two-dimensional, one three-dimensional – in which flow characteristics important to the overall performance of turbomachinery blades are optimized. By showing the designer the trade-off surfaces between the competing objectives, this approach provides considerable insight into the design space under consideration and presents the designer with a range of different Pareto-optimal designs for further consideration. Special emphasis is given to the dimensionality in objective function space of the optimization problem, which seeks designs that perform well for a range of flow performance metrics. The resulting compressor blades achieve their high performance by exploiting complicated physical mechanisms successfully identified through the design process. The system can readily be run on parallel computers, substantially reducing wall-clock run times – a significant benefit when tackling computationally demanding design problems. Overall optimal performance is offered by compromise designs on the Pareto trade-off surface revealed through a true multi-objective design optimization test case. Bearing in mind the continuing rapid advances in computing power and the benefits discussed, this approach brings the adoption of such techniques in real-world engineering design practice a step closer.
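    The Pareto trade-off surfaces central to this approach can be made concrete with a short, self-contained sketch. The Python code below extracts the non-dominated (Pareto-optimal) members from a set of evaluated candidates, assuming every objective is to be minimized; the design names and objective values are invented placeholders, not results from the study.

```python
# Minimal sketch: extract the non-dominated (Pareto-optimal) designs from a
# set of candidates, assuming all objectives are to be minimized. Design
# names and objective values are invented for illustration.
from typing import Dict, List, Tuple


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if objective vector a is at least as good as b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(designs: Dict[str, Tuple[float, ...]]) -> List[str]:
    """Return the names of the designs not dominated by any other design."""
    return [
        name for name, objs in designs.items()
        if not any(dominates(other, objs)
                   for other_name, other in designs.items()
                   if other_name != name)
    ]


# Toy candidates: (pressure loss, deviation from target flow angle)
candidates = {
    "blade_A": (0.042, 1.8),
    "blade_B": (0.047, 0.9),
    "blade_C": (0.051, 2.4),   # dominated by blade_A
    "blade_D": (0.039, 2.6),
}
print(pareto_front(candidates))  # ['blade_A', 'blade_B', 'blade_D']
```

    In a real run the objective vectors would come from the CFD evaluations, and the resulting non-dominated set would form the trade-off surface presented to the designer.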

    Commercialisation of precision agriculture technologies in the macadamia industry

    A prototype vision-based yield monitor has been developed for the macadamia industry. The system estimates yield for individual trees by detecting nuts and their harvested location. The technology was developed by the National Centre for Engineering in Agriculture, University of Southern Queensland, for the purpose of reducing labour and costs in varietal assessment trials, where the yield of individual trees must be measured to indicate tree performance. The project was commissioned by Horticulture Australia Limited.

    Aeronautical Engineering: A special bibliography with indexes, supplement 62

    This bibliography lists 306 reports, articles, and other documents introduced into the NASA scientific and technical information system in September 1975

    Studying Solutions of the p-Median Problem for the Location of Public Bike Stations

    The use of bicycles as a means of transport is becoming more and more popular today, especially in urban areas, as a way to avoid the disadvantages of individual car traffic. City managers are reacting to this trend and actively promote the use of bicycles by providing a network of bicycles for public use and stations where they can be stored. Establishing such a network involves finding the best locations for stations, which is not a trivial task. In this work, we examine models to determine the best locations for bike stations so that citizens travel the shortest possible distance to one of them. Based on real data from the city of Malaga, we formulate our problem as a p-median problem and solve it with a variable neighborhood search algorithm that was automatically configured with irace. We compare the locations proposed by the algorithm with those currently used by the city council, and we also study where new stations should be placed if the network grows.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research was partially funded by the University of Málaga, Andalucía Tech, the Spanish MINECO and FEDER projects TIN2014-57341-R, TIN2016-81766-REDT, and TIN2017-88213-R. C. Cintrano is supported by an FPI grant (BES-2015-074805) from the Spanish MINECO.
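    A minimal sketch of the p-median objective helps fix ideas. The Python code below computes the objective (total distance from demand points to their closest open station) and improves a random set of stations with simple swap moves; it is a hypothetical stand-in for the irace-configured variable neighborhood search used in the paper, and the distance matrix, number of stations, and iteration budget are invented.

```python
# Minimal sketch of the p-median objective with a simple swap-based local
# search (a stand-in for the irace-tuned variable neighborhood search used
# in the paper). The distance matrix and p are invented for illustration.
import random
from typing import List, Sequence


def pmedian_cost(dist: Sequence[Sequence[float]], stations: List[int]) -> float:
    """Sum over demand points of the distance to their closest open station."""
    return sum(min(row[j] for j in stations) for row in dist)


def swap_local_search(dist, p, iters=1000, seed=0):
    rng = random.Random(seed)
    n = len(dist[0])                       # number of candidate sites
    current = rng.sample(range(n), p)      # random initial station set
    best = pmedian_cost(dist, current)
    for _ in range(iters):
        # Swap one open station for a randomly chosen closed candidate site.
        out_idx = rng.randrange(p)
        candidate = rng.choice([j for j in range(n) if j not in current])
        trial = current[:]
        trial[out_idx] = candidate
        cost = pmedian_cost(dist, trial)
        if cost < best:                    # accept only improving swaps
            current, best = trial, cost
    return current, best


# Toy distance matrix: 4 demand points x 5 candidate sites (invented values).
dist = [
    [1.0, 4.0, 3.0, 7.0, 2.0],
    [5.0, 1.0, 6.0, 2.0, 4.0],
    [2.0, 6.0, 1.0, 5.0, 3.0],
    [7.0, 3.0, 4.0, 1.0, 6.0],
]
print(swap_local_search(dist, p=2))
```

    A variable neighborhood search would extend this idea by systematically switching between several neighborhood sizes (for example, multi-station swaps) whenever the current neighborhood stops yielding improvements.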

    Introducing Molly: Distributed Memory Parallelization with LLVM

    Programming for distributed-memory machines has always been a tedious task, but a necessary one, because compilers have not been able to optimize sufficiently for such machines themselves. Molly is an extension to the LLVM compiler toolchain that is able to distribute and reorganize workload and data when the program is organized into statically determined loop control flows. These are represented as polyhedral integer-point sets, which allow program transformations to be applied to them. Memory distribution and layout can be declared by the programmer as needed, and the necessary asynchronous MPI communication is generated automatically. The primary motivation is to run Lattice QCD simulations on IBM Blue Gene/Q supercomputers, but since the implementation is not yet complete, this paper demonstrates its capabilities on Conway's Game of Life.
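    For reference, the stencil at the heart of the Game of Life demonstration fits in a few lines. The NumPy sketch below shows the per-generation update that a tool like Molly would distribute across nodes; the automatically generated asynchronous MPI halo exchange described in the abstract is deliberately omitted, and the grid size and seeding are arbitrary.

```python
# Minimal sketch of the Game of Life stencil that a tool like Molly would
# distribute across ranks; the MPI halo exchange the paper says is generated
# automatically is omitted here. Grid size and seeding are arbitrary.
import numpy as np


def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life generation on a periodic (toroidal) grid."""
    # Count the eight neighbours by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)


rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64), dtype=np.int8)
for _ in range(10):
    grid = life_step(grid)
print(grid.sum(), "cells alive after 10 generations")
```

    In the distributed setting, each rank would own a block of the grid, and the one-cell-wide border needed by the neighbour count is exactly what the generated MPI halo communication would transfer between neighbouring ranks.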