    Generic Model Refactorings

    Many modeling languages share common concepts and principles. For example, Java, MOF, and UML share aspects of the concepts of classes, methods, attributes, and inheritance. However, model transformations such as refactorings specified for a given language cannot be readily reused for another language because their related metamodels may be structurally different. Our aim is to enable a flexible reuse of model transformations across various metamodels. Thus, in this paper, we present an approach allowing the specification of generic model transformations, in particular refactorings, so that they can be applied to different metamodels. Our approach relies on two mechanisms: (1) an adaptation based mainly on the weaving of aspects; (2) the notion of model typing, an extension of object typing in the model-oriented context. We validated our approach by performing experiments that consisted of specifying three well-known refactorings (Encapsulate Field, Move Method, and Pull Up Method) and applying each of them to three different metamodels (Java, MOF, and UML).
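    As an illustration of the idea, the sketch below shows what a metamodel-generic Encapsulate Field refactoring could look like. This is a minimal Python sketch assuming a hypothetical model-typing interface (GenericClass, GenericAttribute, GenericOperation); these names are illustrative stand-ins, not the paper's actual implementation, which the abstract describes only in terms of aspect weaving and model typing.

        # Hypothetical sketch of a metamodel-generic Encapsulate Field refactoring.
        # GenericClass/GenericAttribute/GenericOperation stand in for a "model type"
        # that Java, MOF, and UML classes could all conform to; they are not the
        # paper's actual API.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class GenericAttribute:
            name: str
            visibility: str = "public"

        @dataclass
        class GenericOperation:
            name: str
            body: str = ""

        @dataclass
        class GenericClass:
            name: str
            attributes: List[GenericAttribute] = field(default_factory=list)
            operations: List[GenericOperation] = field(default_factory=list)

        def encapsulate_field(cls: GenericClass, attr_name: str) -> None:
            # Written once against the generic model type: hide the attribute and
            # generate accessor operations. Any metamodel whose classes conform to
            # the type (Java, MOF, UML) could then reuse this transformation.
            attr = next(a for a in cls.attributes if a.name == attr_name)
            attr.visibility = "private"
            cap = attr_name[0].upper() + attr_name[1:]
            cls.operations.append(GenericOperation("get" + cap, "return " + attr_name))
            cls.operations.append(GenericOperation("set" + cap, attr_name + " = value"))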

    Experimental progress in positronium laser physics

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
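    To make the Python-to-CUDA workflow mentioned above concrete, here is a minimal sketch of the Numba pattern: a per-element computation decorated with @cuda.jit so it compiles to a GPU kernel with one thread per energy deposit. The kernel itself (exponential attenuation of charge over the drift distance) and all variable names are simplified assumptions for illustration, not the DUNE simulator's actual code.

        # Minimal Numba CUDA sketch: one GPU thread per energy deposit.
        # The physics (charge attenuation over the drift) is a simplified stand-in.
        import math
        import numpy as np
        from numba import cuda

        @cuda.jit
        def attenuate_kernel(drift_mm, att_length_mm, frac_out):
            i = cuda.grid(1)                      # global thread index
            if i < drift_mm.size:                 # guard against excess threads
                frac_out[i] = math.exp(-drift_mm[i] / att_length_mm[i])

        n = 1_000_000
        drift = cuda.to_device(np.random.uniform(0.0, 500.0, n).astype(np.float32))
        att = cuda.to_device(np.full(n, 2200.0, dtype=np.float32))
        out = cuda.device_array(n, dtype=np.float32)

        threads = 256                             # threads per block
        blocks = (n + threads - 1) // threads     # enough blocks to cover n
        attenuate_kernel[blocks, threads](drift, att, out)
        surviving_fraction = out.copy_to_host()   # back to the host for analysis

    Because each deposit (and each pixel) is computed independently, the workload maps naturally onto thousands of GPU threads, which is the kind of parallelism behind the quoted four-orders-of-magnitude speedup.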