237 research outputs found

    Numerical analysis of temperature stratification in the CIRCE pool facility

    In the framework of Heavy Liquid Metal (HLM) GEN IV nuclear reactor development, the focus is on combining safety and performance. Numerical simulations with Computational Fluid Dynamics (CFD) or system codes are useful tools for predicting the main steady-state phenomena and how accident transients could unfold in GEN IV reactors. In this paper, to support the validation of CFD as a design tool, the capability of ANSYS CFX v15.0 to reproduce mixed natural convection and thermal stratification phenomena inside a pool is investigated. The 3D numerical model is based on the CIRCE facility, located at C.R. ENEA Brasimone. It is a pool facility equipped with all the components necessary to simulate the behavior of an HLM reactor, with LBE flowing in the primary circuit. For the analysis, the LBE physical properties are implemented in CFX using recent NEA equations [2]. Previously published RELAP5-3D© results [1] are employed to derive accurate boundary conditions for the simulation of steady-state conditions in the pool and for CFX validation. The analysis focuses on natural circulation in the pool in the presence of thermal structures in contact with the LBE, modeled as constant-temperature sources. The development of thermal stratification in the pool is observed and evaluated with a mesh sensitivity analysis.
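    The competition between buoyancy and circulation that governs stratification in such a pool is commonly characterized by the Richardson number. A minimal sketch, using rough illustrative LBE figures and hypothetical pool conditions (not the NEA correlations or the CIRCE operating point from the paper):

    ```python
    # Illustrative Richardson-number estimate: Ri = g * beta * dT * L / U^2.
    # Ri >> 1 suggests buoyancy dominates and the pool tends to stratify.
    # All property and flow values below are hypothetical stand-ins.

    G = 9.81  # gravitational acceleration [m/s^2]

    def richardson(beta, delta_t, length, velocity):
        """beta: thermal expansion [1/K], delta_t: temperature span [K],
        length: characteristic height [m], velocity: circulation speed [m/s]."""
        return G * beta * delta_t * length / velocity**2

    # Hypothetical pool: 80 K span over a 4 m height, slow natural circulation.
    ri = richardson(beta=1.2e-4, delta_t=80.0, length=4.0, velocity=0.05)
    print(f"Ri = {ri:.1f}")  # Ri on the order of 1e2 -> strongly stratified
    ```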

    Learning to Zoom and Unzoom

    Many perception systems in mobile computing, autonomous navigation, and AR/VR face strict compute constraints that are particularly challenging for high-resolution input images. Previous works propose nonuniform downsamplers that "learn to zoom" on salient image regions, reducing compute while retaining task-relevant image information. However, for tasks with spatial labels (such as 2D/3D object detection and semantic segmentation), such distortions may harm performance. In this work (LZU), we "learn to zoom" in on the input image, compute spatial features, and then "unzoom" to revert any deformations. To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible. LZU can be applied to any task with 2D spatial input and any model with 2D spatial features, and we demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD, semantic segmentation on Cityscapes, and monocular 3D object detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well. Comment: CVPR 2023. Code and additional visuals available at https://tchittesh.github.io/lzu
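    The key property the abstract relies on is that a monotone piecewise-linear warp is exactly invertible. A 1D sketch of the zoom/unzoom round trip (the paper uses learned 2D piecewise bilinear warps; the knot placement here is a hypothetical hand-picked saliency, not a learned one):

    ```python
    import numpy as np

    # A monotone piecewise-linear "zoom" concentrates samples in a salient
    # interval; because the map is strictly increasing, "unzoom" can revert
    # it exactly by swapping the roles of the knots.

    knots_in = np.array([0.0, 0.4, 0.6, 1.0])   # input coordinates
    knots_out = np.array([0.0, 0.2, 0.8, 1.0])  # warped coords: [0.4, 0.6] zoomed

    def zoom(x):
        """Forward warp: piecewise-linear and strictly increasing."""
        return np.interp(x, knots_in, knots_out)

    def unzoom(y):
        """Inverse warp: interpolate with input/output knots swapped."""
        return np.interp(y, knots_out, knots_in)

    x = np.linspace(0.0, 1.0, 11)
    assert np.allclose(unzoom(zoom(x)), x)  # round trip recovers coordinates
    ```

    The salient interval [0.4, 0.6] occupies 60% of the warped axis, so features computed there get 3x the resolution, and the inverse map puts them back where spatial labels expect them.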

    A test of statistical hadronization with exclusive rates

    We studied the statistical hadronization model in its full microcanonical formulation, enforcing the maximal set of conservation laws: energy-momentum, angular momentum, parity, C-parity, isospin, and abelian charges. The microcanonical weight (proportional to the probability) of an asymptotic channel has been calculated in a field-theory framework in order to account for relativistic effects due to the cluster's finite volume. A purposely devised Monte Carlo method that allows the channel weight to be calculated efficiently has been set up. A preliminary comparison of the model with measured exclusive rates in low-energy (√s = 2.1 GeV and 2.4 GeV) e+e− collisions and with branching ratios of some heavy resonances is shown.

    Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images

    Self-driving vehicles rely on urban street maps for autonomous navigation. In this paper, we introduce Pix2Map, a method for inferring urban street map topology directly from ego-view images, as needed to continually update and expand existing maps. This is a challenging task, as we need to infer a complex urban road topology directly from raw image data. The main insight of this paper is that this problem can be posed as cross-modal retrieval by learning a joint, cross-modal embedding space for images and existing maps, represented as discrete graphs that encode the topological layout of the visual surroundings. We conduct our experimental evaluation using the Argoverse dataset and show that it is indeed possible to accurately retrieve street maps corresponding to both seen and unseen roads solely from image data. Moreover, we show that our retrieved maps can be used to update or expand existing maps, and we even show proof-of-concept results for visual localization and image retrieval from spatial graphs. Comment: 12 pages, 8 figures
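    Once both modalities live in a joint embedding space, retrieval reduces to a nearest-neighbor search by cosine similarity. A minimal sketch with random stand-in embeddings (the image and graph encoders that produce them are the learned part of Pix2Map and are omitted here):

    ```python
    import numpy as np

    # Cross-modal retrieval in a shared embedding space: given an image
    # embedding, return the index of the closest map-graph embedding.

    def retrieve(image_emb, map_embs):
        """Index of the map whose embedding has highest cosine similarity."""
        img = image_emb / np.linalg.norm(image_emb)
        maps = map_embs / np.linalg.norm(map_embs, axis=1, keepdims=True)
        return int(np.argmax(maps @ img))

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(100, 32))              # stand-in map embeddings
    query = gallery[42] + 0.05 * rng.normal(size=32)  # noisy matching image
    assert retrieve(query, gallery) == 42             # nearest map is recovered
    ```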

    Advances in Focused Ion Beam Tomography for Three-Dimensional Characterization in Materials Science

    Over the years, FIB-SEM tomography has become an extremely important technique for the three-dimensional reconstruction of microscopic structures with nanometric resolution. This paper describes in detail the steps required to perform this analysis, from the experimental setup to the data analysis and final reconstruction. To demonstrate the versatility of the technique, a comprehensive list of applications is also summarized, ranging from batteries to shale rocks and even some types of soft materials. Moreover, continuous technological development, such as the introduction of the latest plasma- and cryo-FIB instruments, can open the way to analyzing a large class of soft materials with this technique, while the introduction of new machine learning and deep learning systems will not only improve the resolution and quality of the final data but also expand the degree of automation and efficiency in dataset handling. These future developments, combined with a technique that is already reliable and widely used in various fields of research, are certain to make FIB-SEM tomography a routine tool in electron microscopy and materials characterization.
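    The core of the reconstruction step is simple: the 2D SEM slices acquired after successive ion-beam cuts are stacked into a voxel volume and segmented. A toy sketch with synthetic gray-level slices (real pipelines add drift alignment, shear correction, and anisotropic voxel scaling, all omitted here):

    ```python
    import numpy as np

    # Stack serial-section slices into a 3D volume, segment by intensity
    # threshold, and quantify the volume fraction of the bright phase.

    def volume_fraction(stack, threshold):
        """Fraction of voxels above the segmentation threshold."""
        volume = np.stack(stack, axis=0)  # (n_slices, H, W) voxel grid
        mask = volume > threshold         # simple intensity segmentation
        return float(mask.mean())

    # Two hypothetical 4x4 slices; slice 0 contains a bright 2x2 inclusion.
    slices = [np.full((4, 4), 50), np.full((4, 4), 50)]
    slices[0][1:3, 1:3] = 200
    print(volume_fraction(slices, threshold=128))  # 4 of 32 voxels -> 0.125
    ```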

    Graphene-based nanomaterials for tissue engineering in the dental field

    The world of dentistry is approaching graphene-based nanomaterials as substitutes for tissue engineering. Apart from its exceptional mechanical strength, electrical conductivity, and thermal stability, graphene and its derivatives can be functionalized with several bioactive molecules. They can also be incorporated into different scaffolds used in regenerative dentistry, generating nanocomposites with improved characteristics. This review presents the state of the art of graphene-based nanomaterial applications in the dental field. We first discuss the interactions between cells and graphene, summarizing the available in vitro and in vivo studies concerning graphene biocompatibility and cytotoxicity. We then highlight the role of graphene-based nanomaterials in stem cell control, in terms of adhesion, proliferation, and differentiation. Particular attention is given to stem cells of dental origin, such as those isolated from dental pulp, periodontal ligament, or dental follicle. The review then discusses the interactions of graphene-based nanomaterials with cells of the immune system; we also focus on the antibacterial activity of graphene nanomaterials. In the last section, we offer our perspectives on the various opportunities facing the use of graphene and its derivatives in association with titanium dental implants, membranes for bone regeneration, resins, cements, and adhesives, as well as for tooth-whitening procedures.

    Fast Neural Scene Flow

    Scene flow is an important problem as it provides low-level motion cues for many downstream tasks. State-of-the-art learning methods are usually fast and can achieve impressive performance on in-domain data, but usually fail to generalize to out-of-distribution (OOD) data or handle dense point clouds. In this paper, we focus on a runtime optimization-based neural scene flow pipeline. In (a) one can see its application in the densification of lidar. However, in (c) one sees that the major drawback is the extensive computation time. We identify that the common speedup strategy in network architectures for coordinate networks has little effect on scene flow acceleration [see green (b)], unlike image reconstruction [see pink (b)]. With the dominant computational burden stemming instead from the Chamfer loss function, we propose to use a distance transform-based loss function to accelerate [see purple (b)], which achieves up to a 30x speedup and on-par estimation performance compared to NSFP [see (c)]. When tested on 8k points, it is as efficient [see (c)] as leading learning methods, achieving real-time performance. Comment: 17 pages, 10 figures, 6 tables
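    The speedup idea is that a Chamfer loss needs a nearest-neighbor search at every optimization step, whereas a distance transform of the target cloud can be precomputed once and queried by grid lookup. A 2D sketch under simplifying assumptions (unit-square coordinates, a small fixed grid; the paper works on 3D lidar clouds):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # Rasterize the target point cloud once, precompute distance-to-nearest-
    # target per grid cell, then read distances off the grid per query point.

    def build_dt(target_pts, grid=64):
        """Euclidean distance transform of the target cloud on a regular grid."""
        occ = np.ones((grid, grid), dtype=bool)
        idx = np.clip((target_pts * grid).astype(int), 0, grid - 1)
        occ[idx[:, 0], idx[:, 1]] = False           # cells holding target points
        return distance_transform_edt(occ) / grid   # distances in unit coords

    def dt_loss(source_pts, dt, grid=64):
        """Mean lookup distance from source points to the target cloud."""
        idx = np.clip((source_pts * grid).astype(int), 0, grid - 1)
        return float(dt[idx[:, 0], idx[:, 1]].mean())

    target = np.array([[0.25, 0.25], [0.75, 0.75]])
    dt = build_dt(target)                  # built once, reused every iteration
    assert dt_loss(target, dt) == 0.0      # target points sit in zero-distance cells
    ```

    The lookup is O(1) per point regardless of target cloud size, at the cost of quantizing distances to the grid resolution.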