347 research outputs found

    Expressing object-oriented concepts in Fortran 90

    Assessment and evaluation of computer science education

    Extracting UML Class Diagrams from Object-Oriented Fortran: ForUML

    Many scientists who implement computational science and engineering software have adopted the object-oriented (OO) Fortran paradigm. One of the challenges faced by OO Fortran developers is the inability to obtain high-level software design descriptions of existing applications. Knowledge of the overall software design is not only valuable in the absence of documentation, it also helps developers accomplish various tasks during the software development process, especially maintenance and refactoring. The software engineering community commonly uses reverse engineering techniques to deal with this challenge. A number of reverse-engineering-based tools have been proposed, but few of them can be applied to OO Fortran applications. In this paper, we propose a software tool to extract unified modeling language (UML) class diagrams from Fortran code. The UML class diagram helps developers examine the entities and their relationships in the software system. The extracted diagrams enhance software maintenance and evolution. The experiments carried out to evaluate the proposed tool demonstrate its accuracy as well as a few of its limitations.
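    To make the reverse-engineering idea concrete, the Python sketch below shows the kind of extraction such a tool performs: it scans Fortran source for derived-type definitions and `extends` clauses and emits a minimal PlantUML class diagram. This is an illustrative assumption only, not ForUML itself; the regular expression, helper names, and command-line usage are mine and handle only simple declarations.

```python
import re
import sys

# Minimal sketch (not ForUML): recover derived-type names and inheritance
# from object-oriented Fortran source and emit a PlantUML class diagram.
TYPE_DEF = re.compile(
    r"^\s*type\s*(?:,\s*extends\s*\(\s*(\w+)\s*\))?\s*(?:,[\w\s,()]*)?::\s*(\w+)",
    re.IGNORECASE)

def extract_classes(source_lines):
    """Return a list of (type_name, parent_or_None) pairs."""
    classes = []
    for line in source_lines:
        m = TYPE_DEF.match(line)
        if m:
            parent, name = m.group(1), m.group(2)
            classes.append((name, parent))
    return classes

def to_plantuml(classes):
    out = ["@startuml"]
    for name, parent in classes:
        out.append(f"class {name}")
        if parent:
            out.append(f"{parent} <|-- {name}")   # generalization edge
    out.append("@enduml")
    return "\n".join(out)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:                  # e.g. some_module.f90
        print(to_plantuml(extract_classes(f)))
```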

    A Survey on Parallel Architecture and Parallel Programming Languages and Tools

    In this paper, we present a brief review of the evolution of parallel computing toward multi-core architectures. The survey covers more than 45 languages, libraries and tools used to date to improve performance through parallel programming. Particular emphasis is placed on the architecture of parallel systems.

    Multiscale Computation with Interpolating Wavelets

    Multiresolution analyses based upon interpolets, interpolating scaling functions introduced by Deslauriers and Dubuc, are particularly well-suited to physical applications because they allow exact recovery of the multiresolution representation of a function from its sample values on a finite set of points in space. We present a detailed study of the application of wavelet concepts to physical problems expressed in such bases. The manuscript describes algorithms for the associated transforms which, for properly constructed grids of variable resolution, compute correctly without having to introduce extra grid points. We demonstrate that for the application of local homogeneous operators in such bases, the non-standard multiply of Beylkin, Coifman and Rokhlin also proceeds exactly for inhomogeneous grids of appropriate form. To obtain less stringent conditions on the grids, we generalize the non-standard multiply so that communication may proceed between non-adjacent levels. The manuscript concludes with timing comparisons against naive algorithms and an illustration of the scale-independence of the convergence rate of the conjugate gradient solution of Poisson's equation using a simple preconditioning, suggesting that this approach leads to an O(n) solution of this equation. Comment: 33 pages; figures available at http://laisla.mit.edu/muchomas/Papers/nonstand-figs.ps . Updated: (1) the figures file (figs.ps) now appears with the posting on the server; (2) references were lost in the last submission.
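    As a deliberately simplified illustration of the interpolet machinery described above, the Python sketch below implements one level of an interpolating-wavelet transform built on the 4-point Deslauriers-Dubuc predictor. This is generic textbook material under my own assumptions (uniform grid, simple boundary fallback), not the variable-resolution algorithms of the manuscript.

```python
import numpy as np

# One level of an interpolating-wavelet (lifting) transform with the
# 4-point Deslauriers-Dubuc predictor.  Even samples are kept exactly;
# detail coefficients measure how far odd samples deviate from the
# interpolated prediction, and vanish where the data is locally cubic.

def dd4_forward(f):
    """Split uniformly sampled f (odd length 2n+1) into coarse samples and
    interpolet detail coefficients at the odd points."""
    coarse = f[0::2].copy()                       # even samples survive exactly
    n = len(f) // 2
    detail = np.empty(n)
    for k in range(n):                            # detail[k] lives at f[2k+1]
        if 1 <= k <= n - 2:
            # interior Deslauriers-Dubuc weights (-1, 9, 9, -1)/16
            pred = (-coarse[k-1] + 9*coarse[k] + 9*coarse[k+1] - coarse[k+2]) / 16.0
        else:
            pred = 0.5 * (coarse[k] + coarse[k+1])   # linear fallback at the edges
        detail[k] = f[2*k + 1] - pred
    return coarse, detail

x = np.linspace(0.0, 1.0, 17)
coarse, detail = dd4_forward(x**3)                # cubic data
print(np.max(np.abs(detail[1:-1])))               # interior details ~ 1e-16
```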

    Introduction to the polymorphic tracking code: Fibre bundles, polymorphic Taylor types and "Exact tracking"

    This is a description of the basic ideas behind the "Polymorphic Tracking Code" or PTC. PTC is truly a "kick code" or symplectic integrator in the tradition of TRACYII, SixTrack, and TEAPOT. However, it correctly separates the mathematical atlas of charts and the magnets at a structural level by implementing a "restricted fibre bundle." The resulting structures allow backward propagation and recirculation, something not possible in standard tracking codes. PTC is also polymorphic in handling real numbers (single, double and even quadruple precision) and Taylor series. Therefore it has all the tools associated with the TPSA packages: Lie methods, Normal Forms, Cosy-Infinity capabilities, beam envelopes for radiation, etc., as well as parameter dependence on-the-fly. However, PTC is an integrator, and as such one must generally adhere to the Talman "exactness" view of modelling. Incidentally, it supports exact sector and rectangular bends as well. Of course, one can certainly bypass its integrator, and the user is free to violate Talman's principles on his own; PTC provides the tools to dig one's grave but not the encouragement. The reader will find in Appendix B a PowerPoint presentation of FPP. The presentation is a bit out of date, but it gives a good idea of FPP, which is essential to PTC. FPP is a stand-alone library and can be used by anyone with a FORTRAN90 compiler. This presentation is also, to be honest, a place where the authors intend to document, however incompletely, nearly two years of work: the development of FPP and subsequently that of PTC. Our ultimate intention is to morph PTC completely into MAD-X. The code MAD-X is an upgrade of MAD-8, not of the C++ CLASSIC-based code MAD-9. The present document does not address when and how this will be done. It is also our goal to link, if possible, PTC with CAD programs for the design of complex follow-the-terrain beam lines. So far FPP and PTC have been used in the design of beam separators (complex polymorphs) and recirculators. They have also been linked with the code BMAD from Cornell. There is still a lot of work to be done if these tools are to be generally usable by a wide range of people. In addition, more complex structures will be needed to handle effects beyond single-particle dynamics in a way that respects the fundamental mathematical integrity of the structures of PTC.
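    For readers unfamiliar with the "kick code" terminology, the Python toy below shows the drift-kick-drift (symplectic split-step) pattern for tracking a particle through a quadrupole. It is standard accelerator-physics material sketched under my own simplifications (small-angle, uncoupled transverse motion); it is not PTC code and says nothing about PTC's polymorphic types or fibre-bundle structures.

```python
import numpy as np

# Toy drift-kick-drift symplectic tracking through a single quadrupole.
# State vector is (x, px, y, py) with px, py treated as transverse slopes.

def drift(state, length):
    """Small-angle drift: positions advance along the slopes."""
    x, px, y, py = state
    return np.array([x + length * px, px, y + length * py, py])

def quad_kick(state, k1, length):
    """Thin quadrupole kick of integrated strength k1*length."""
    x, px, y, py = state
    return np.array([x, px - k1 * length * x, y, py + k1 * length * y])

def track_quadrupole(state, k1, length, n_slices=10):
    """Second-order (leapfrog) drift-kick-drift integration of one magnet."""
    ds = length / n_slices
    for _ in range(n_slices):
        state = drift(state, ds / 2)
        state = quad_kick(state, k1, ds)
        state = drift(state, ds / 2)
    return state

# Track one particle through a 0.5 m focusing quadrupole (k1 = 2 m^-2).
print(track_quadrupole(np.array([1e-3, 0.0, 1e-3, 0.0]), k1=2.0, length=0.5))
```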

    Modeling and analyses of thermal response tests in real and reduced-scale experiments for geothermal applications involving deep boreholes

    This Ph.D. dissertation is aimed at developing models and defining innovative experimental strategies for performing and analyzing Thermal Response Tests (TRTs) for Ground Coupled Heat Pump (GCHP) applications. Three finite-difference numerical models for coaxial, single-U and double-U Borehole Heat Exchangers (BHEs) have been developed starting from literature contributions and coupled with the Fast Fourier Transform (FFT) spectral method. The models have been implemented in three in-house Fortran90 codes that have been optimized to cope with variable longitudinal and radial mesh distributions for simulating the BHE configurations at given geothermal gradients, resembling both standard conditions and geothermal anomalies. The models have been extensively validated through comparison of the numerical results with experimental measurements. Different ground properties and geothermal gradients along the ground depth can be handled by the models and set as initial and boundary conditions of the problem. The FFT method has been implemented in a dedicated Fortran90 code to exploit the advantage of handling different boundary conditions, in terms of the heat transfer rate injected or extracted in a TRT, without the need to perform the numerical simulation from scratch. The spectral analysis related to the FFT method has also been useful to highlight the importance of the effect of the geothermal gradient, which is both numerical and physical, on simulated and real TRTs. The present Ph.D. study addresses the BHE behavior in the early period, i.e., for Fourier numbers typical of TRT measurements. The numerical results are aimed at understanding the applicability of standard TRT analysis methods (essentially based on the Infinite Line Source model, ILS) to shallow and deep BHEs (DBHEs) that may involve thermal conditions of "crossing temperatures" between ground and heat carrier fluid. The study has been carried out for single and multiple ground layers of equal thickness with different thermal conductivities along the depth. A heat transfer rate per unit length that is perfectly uniform with depth is the main hypothesis on which the ILS model is based. On the other hand, the unavoidable variation of the distribution of the heat transfer rate per unit length along the borehole depth violates the assumption of a uniform temperature at the borehole wall at each time. The models developed in the present Ph.D. thesis take this aspect into account, providing simulations closer to reality. Therefore, these models and the related simulation results can serve as useful numerical references for other models and approaches. The present Ph.D. study also demonstrates that the thermal conditions of "crossing temperatures" between ground and heat carrier fluid in BHEs (especially DBHEs) are related to the "natural" heat rate made available by the geothermal gradient, which in some cases can override the external heat input rate injected (or extracted) by the TRT machine. This affects ground thermal conductivity estimations based on standard TRT methods. The effect is incorporated into the qratio parameter introduced by the present Ph.D. study and into a specific dimensionless g-transfer function called g0. Both qratio and the g0 function incorporate the geothermal gradient. The qratio is expected to be relevant to future TRT guidelines at national and international levels.
Error analyses of the BHE and ground property estimations obtained from the ILS model are reported in the present thesis. Besides the numerical work, the present Ph.D. thesis presents the experimental setup related to a suitable reduced-scale prototype of the real BHE and the surrounding ground for innovative TRT experiments. The scaled ground volume is realized with a slate block. The scaled heat exchanger, inserted into the slate block, is equipped with a central electrical heater along its entire depth and with temperature sensors at different radial distances and depths for the Electric Depth Distributed Thermal Response Test (EDDTRT). The measurements collected during the Ph.D. work highlight the possibility of performing reliable TRT experiments and estimating the grout/ground thermal conductivity by exploiting a central electric heater and cheap digital one-wire sensors distributed along the depth instead of expensive optical fibers. It has to be specified that for the reduced-scale experiment the digital one-wire sensors have necessarily been replaced by thermocouples. Measurement error analyses are reported in the thesis. The all-in-one BHE equipped with the central electrical heater and with temperature sensors for the EDDTRT enables continuous BHE performance monitoring, testing for correct grouting, and testing for aquifer presence. A Geothermal Heat Pump Portal and Online Designer for Ground Heat Exchanger Fields has also been realized during the Ph.D. study (see https://en.geosensingdesign.org/). The website offers the first (and completely free) worldwide web calculation tool for the design of BHE fields, based on a modified version of the ASHRAE method, which is also employed in the corresponding UNI Italian standard.
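    Since the discussion above leans on the Infinite Line Source (ILS) model, a short sketch of how a standard TRT evaluation uses it may help. The Python snippet below computes the ILS temperature response and recovers the ground thermal conductivity from the late-time slope of temperature versus ln(t); all parameter values are illustrative assumptions, not data from the thesis.

```python
import numpy as np
from scipy.special import exp1

# Infinite Line Source model: T(r,t) = T0 + q/(4*pi*k) * E1(r^2 / (4*alpha*t)),
# with q the heat rate per unit length [W/m], k the ground conductivity
# [W/(m K)] and alpha the thermal diffusivity [m^2/s].

def ils_temperature(t, r, q, k, alpha, T0):
    """Ground temperature at radius r and time t for a constant line load."""
    return T0 + q / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

def conductivity_from_slope(q, slope):
    """Standard TRT estimate: at late times T grows linearly in ln(t) with
    slope q/(4*pi*k), so k = q / (4*pi*slope)."""
    return q / (4.0 * np.pi * slope)

# Synthetic late-time record at the borehole wall (illustrative values only).
q, k_true, alpha, T0, rb = 50.0, 2.5, 1.0e-6, 12.0, 0.07
t = np.linspace(50.0, 200.0, 200) * 3600.0        # 50 to 200 hours, in seconds
T = ils_temperature(t, rb, q, k_true, alpha, T0)

slope = np.polyfit(np.log(t), T, 1)[0]            # fit T against ln(t)
print(conductivity_from_slope(q, slope))          # ~2.5 W/(m K)
```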

    PROOST: object-oriented approach to multiphase reactive transport modeling in porous media

    Reactive transport modeling involves solving several nonlinear coupled phenomena, among them the flow of fluid phases, the transport of chemical species and energy, and chemical reactions. There are different ways to consider this coupling, which may be more or less suitable depending on the nature of the problem to be solved. In this paper we acknowledge the importance of flexibility in reactive transport codes and show how object-oriented programming can facilitate this feature. We present PROOST, an object-oriented code that allows solving reactive transport problems considering different coupling approaches. The code's main classes and their interactions are presented. PROOST's performance is illustrated by the solution of a multiphase reactive transport problem where geochemistry affects hydrodynamic processes.
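    The flexibility argument made above is easy to picture with a small object-oriented example. The Python sketch below is my own generic illustration, not PROOST's class hierarchy: it defines a CouplingScheme interface and two interchangeable splitting strategies for a toy advection-reaction problem, so the time-stepping driver never needs to know which coupling approach is in use.

```python
from abc import ABC, abstractmethod
import numpy as np

class CouplingScheme(ABC):
    """Interface the time-stepping driver relies on (toy 1-D advection + decay)."""
    def __init__(self, velocity, rate):
        self.velocity, self.rate = velocity, rate

    def transport(self, c, dt):
        # explicit upwind advection on a periodic grid with dx = 1
        return c - self.velocity * dt * (c - np.roll(c, 1))

    def react(self, c, dt):
        return c * np.exp(-self.rate * dt)        # first-order decay chemistry

    @abstractmethod
    def advance(self, c, dt):
        """Advance concentrations c by one coupled time step."""

class SequentialCoupling(CouplingScheme):
    """Simple operator splitting: full transport step, then full reaction step."""
    def advance(self, c, dt):
        return self.react(self.transport(c, dt), dt)

class StrangCoupling(CouplingScheme):
    """Symmetric (Strang) splitting: half reaction, transport, half reaction."""
    def advance(self, c, dt):
        return self.react(self.transport(self.react(c, dt / 2), dt), dt / 2)

def run(scheme, c0, dt, n_steps):
    """Driver code depends only on the CouplingScheme interface."""
    c = c0
    for _ in range(n_steps):
        c = scheme.advance(c, dt)
    return c

c0 = np.zeros(100); c0[10:20] = 1.0               # initial solute pulse
print(run(StrangCoupling(velocity=0.5, rate=0.01), c0, dt=1.0, n_steps=50).max())
```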

    Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control

    The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are starved for computing power to help overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools through a case study of an imperfect-information dynamic programming problem with a trade-off between learning and experimentation, that is, a choice between controlling the policy target and learning the system parameters. Specifically, we use a model of active learning and control of a linear autoregression with unknown slope that has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively parallel computation using graphics processors. Our findings are cautiously optimistic: the new tools let us easily achieve a factor-of-15 performance gain relative to an implementation targeting single-core processors, and thus establish a better reference point on the computational speed vs. coding complexity trade-off frontier. While further gains and wider applicability may lie behind a steep learning barrier, we argue that the future of many computations belongs to parallel algorithms in any case.
    Keywords: Graphics Processing Units, CUDA programming, Dynamic programming, Learning, Experimentation
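    The parallelism claim is easiest to see in code. The NumPy sketch below runs synchronous value-function iteration for a toy cake-eating problem: every state's Bellman maximization is independent of every other state's, which is precisely the one-thread-per-grid-point structure a GPU kernel exploits. The toy model and grid are my own assumptions and are unrelated to the paper's learning-and-control problem.

```python
import numpy as np

# Toy value-function iteration (cake-eating) illustrating why dynamic
# programming maps well to GPUs: each state's update is independent, so the
# row-wise maximization below could run as one GPU thread per grid point.
grid = np.linspace(1e-3, 1.0, 512)                # wealth grid
beta = 0.95

# utility[i, j] = utility of moving from state grid[i] to next state grid[j]
W, Wn = np.meshgrid(grid, grid, indexing="ij")
consumption = W - Wn
utility = np.where(consumption > 0,
                   np.log(np.maximum(consumption, 1e-12)),
                   -1e10)                         # large penalty for infeasible moves

V = np.zeros_like(grid)
for _ in range(600):                              # synchronous Bellman sweeps
    # every row (state) takes its maximum independently of every other row
    V = np.max(utility + beta * V[None, :], axis=1)

policy = np.argmax(utility + beta * V[None, :], axis=1)
print(V[-1], grid[policy[-1]])                    # value and next state at w = 1
```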

    Automated Evaluation of One-Loop Six-Point Processes for the LHC

    In the very near future the first data from the LHC will be available. The searches for the Higgs boson and for new physics will require precise predictions for both the signal and the background processes. Tree-level calculations typically suffer from large renormalization scale uncertainties. I present an efficient implementation of an algorithm for the automated, Feynman-diagram-based calculation of one-loop corrections to processes with many external particles. This algorithm has been successfully applied to compute the virtual corrections to the process $u\bar{u}\to b\bar{b}b\bar{b}$ in massless QCD and can easily be adapted to other processes required for the LHC. Comment: 232 pages; Ph.D. thesis.