
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Assessment of hyper-reduction techniques in the context of CFD-based intrusive reduced order modeling

    Get PDF
    The aircraft design and optimization process relies on an extensive number of computations for a wide range of parameters defining flight conditions, mass cases or shape variations. As the deployment of high-fidelity methods like computational fluid dynamics (CFD) is still too expensive for such multi-query scenarios, reduced order models (ROMs) are a popular approach to reduce the computational cost while retaining sufficient accuracy. ROMs are usually based on a low-dimensional representation of the full order model (FOM) that is used to map a set of input parameters to an approximate solution of the FOM. This thesis investigates a physics-based ROM that seeks an optimized representation of the FOM in a proper orthogonal decomposition (POD) reduced space by minimizing the steady residual obtained from a CFD solver. The so-called least-squares ROM (LSQ-ROM) is extended by a consistent hyperreduction, which reduces the number of entries of the residual vector minimized during the prediction. Hyperreduction was proposed in earlier studies on the LSQ-ROM in order to decouple the algorithm's computational complexity from the problem size; however, limitations within DLR's CFD solver TAU prevented its consistent application. The CFD solver "CFD for ONERA, DLR and AIRBUS" (CODA), which is currently under development, allows the implementation and investigation of a consistent hyperreduction that effectively removes the dependency on the original problem size. The main goal of this thesis is the implementation and performance assessment of a consistent hyperreduction for the LSQ-ROM that is coupled with the solver CODA and based on a reduced CFD mesh. The reduced mesh is identified by a set of hyperreduction indices selected with the discrete empirical interpolation method (DEIM) and missing point estimation (MPE), and it enables a direct reduction of the effort for the residual evaluation in CODA. To assess the accuracy and prediction time of the hyperreduction, the consistent implementation is applied to the steady flow prediction of two 2D test cases in the subsonic and transonic regimes and one 3D test case in the transonic regime. The implemented hyperreduction effectively reduces the prediction time while causing only minor accuracy deterioration. The results highlight that the new hyperreduction is superior to the former implementation: in particular, for high reduction levels, the consistent hyperreduction becomes significantly faster, with speed-up factors of around 5 for the 2D test cases and up to 25 for the 3D test case.
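
    As a hedged illustration of the idea, the following Python sketch selects DEIM indices from a matrix U of residual snapshots and then minimizes a steady residual only at those indices. NumPy/SciPy, the placeholder residual callable and the POD basis V are assumptions on my part, not the thesis implementation; in an actual coupling with a CFD solver, the residual would be evaluated on the reduced mesh so that only the selected entries are ever computed.

        # Minimal DEIM + hyperreduced LSQ-ROM sketch (illustrative, not the
        # thesis implementation). U: residual-snapshot basis, V: POD basis,
        # residual: callable returning the full steady residual vector.
        import numpy as np
        from scipy.optimize import least_squares

        def deim_indices(U):
            m = U.shape[1]
            idx = [int(np.argmax(np.abs(U[:, 0])))]
            for l in range(1, m):
                # Coefficients that interpolate column l at the chosen indices
                c = np.linalg.solve(U[idx, :l], U[idx, l])
                r = U[:, l] - U[:, :l] @ c          # interpolation residual
                idx.append(int(np.argmax(np.abs(r))))
            return np.array(idx)

        def lsq_rom_predict(V, residual, a0, idx):
            # Minimize the residual restricted to the hyperreduction indices
            fun = lambda a: residual(V @ a)[idx]
            return least_squares(fun, a0).x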

    Efficient Calculation of Distance Transform on Discrete Global Grid Systems and Its Application in Automatic Soil Sampling Site Selection

    Get PDF
    Geospatial data analysis often requires computing a distance transform (DT) for a given vector feature. For instance, in wildfire management, it is helpful to find the distance of all points in an area from the wildfire's boundary. Distance transform computation in traditional Geographic Information Systems (GIS) is usually adopted from image processing methods and is therefore prone to the distortion resulting from flat maps. Discrete Global Grid Systems (DGGS) are relatively new low-distortion globe-based GIS that discretize the Earth into highly regular cells using multiresolution grids. In this thesis, we introduce an efficient DT algorithm for DGGS. Our novel algorithm heavily exploits the hierarchy of a DGGS and its mathematical properties, and it applies to many different DGGSs. We evaluate our method by comparing its distortion with that of the DT methods used in traditional GIS, and its speed with that of general 3D mesh DT algorithms applied to the DGGS grid. We demonstrate that our method is efficient and has lower distortion. To evaluate our DT algorithm further, we use a real-world case study of selecting soil test points within agricultural fields. Multiple criteria, including the distances of soil test points to different features, must be considered to select representative points in a field. We show that DT can help to automate the selection of test points by allowing us to efficiently calculate the objectives for a representative test point. DT also allows for the efficient calculation of buffers around certain features, such as farm headlands and underground pipelines, so that certain regions can be avoided when selecting the test points.
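
    As a rough, hedged illustration (not the hierarchy-exploiting algorithm of the thesis), the following sketch computes a multi-source breadth-first distance transform on a flat cell grid; on a DGGS, the 4-neighbour lookup would be replaced by the DGGS cell adjacency and the hop count by geodesic cell distances.

        # Multi-source BFS distance transform on a cell grid (illustrative).
        # feature_mask: boolean grid, True where the vector feature lies.
        from collections import deque
        import numpy as np

        def distance_transform(feature_mask):
            rows, cols = feature_mask.shape
            dist = np.full((rows, cols), -1, dtype=int)
            queue = deque()
            for r, c in zip(*np.nonzero(feature_mask)):   # seed feature cells
                dist[r, c] = 0
                queue.append((r, c))
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and dist[nr, nc] < 0:
                        dist[nr, nc] = dist[r, c] + 1     # one hop further out
                        queue.append((nr, nc))
            return dist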

    Fast Neural Scene Flow

    Full text link
    Scene flow is an important problem, as it provides low-level motion cues for many downstream tasks. State-of-the-art learning methods are usually fast and achieve impressive performance on in-domain data, but they typically fail to generalize to out-of-distribution (OOD) data or to handle dense point clouds. In this paper, we focus on a runtime optimization-based neural scene flow pipeline, which is applicable, for example, to the densification of lidar, but whose major drawback is its extensive computation time. We identify that the speedup strategies commonly used in coordinate-network architectures have little effect on scene flow acceleration, unlike in image reconstruction. The dominant computational burden stems instead from the Chamfer loss function, so we propose a distance transform-based loss function to accelerate the optimization, achieving up to a 30x speedup with on-par estimation performance compared to NSFP. When tested on 8k points, our method is as efficient as leading learning methods, achieving real-time performance. Comment: 17 pages, 10 figures, 6 tables.
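
    The contrast between the two loss functions can be sketched as follows. This is a schematic under my own assumptions: the DT grid is built once from a voxelized target cloud, and the lookup here is nearest-cell (the paper interpolates the grid so that gradients flow back to the warped points); origin and cell denote the grid origin and cell size.

        # Chamfer loss (two nearest-neighbour searches per step) versus a
        # precomputed distance-transform lookup (one grid read per point).
        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.ndimage import distance_transform_edt

        def chamfer(warped, target):
            d1, _ = cKDTree(target).query(warped)
            d2, _ = cKDTree(warped).query(target)
            return d1.mean() + d2.mean()

        def build_dt(occ, cell):
            # occ: boolean voxel grid of the target cloud (True = occupied);
            # gives each free voxel's distance to the nearest occupied voxel
            return distance_transform_edt(~occ) * cell

        def dt_loss(warped, dt_grid, origin, cell):
            ijk = np.clip(((warped - origin) / cell).astype(int), 0,
                          np.array(dt_grid.shape) - 1)
            return dt_grid[ijk[:, 0], ijk[:, 1], ijk[:, 2]].mean()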

    A skeletonization algorithm for gradient-based optimization

    Full text link
    The skeleton of a digital image is a compact representation of its topology, geometry, and scale. It has utility in many computer vision applications, such as image description, segmentation, and registration. However, skeletonization has seen only limited use in contemporary deep learning solutions. Most existing skeletonization algorithms are not differentiable, making it impossible to integrate them with gradient-based optimization. Compatible algorithms based on morphological operations and neural networks have been proposed, but their results often deviate from the geometry and topology of the true medial axis. This work introduces the first three-dimensional skeletonization algorithm that is both compatible with gradient-based optimization and preserves an object's topology. Our method is exclusively based on matrix additions and multiplications, convolutional operations, basic non-linear functions, and sampling from a uniform probability distribution, allowing it to be easily implemented in any major deep learning library. In benchmarking experiments, we show the advantages of our skeletonization algorithm over non-differentiable, morphological, and neural-network-based baselines. Finally, we demonstrate the utility of our algorithm by integrating it with two medical image processing applications that use gradient-based optimization: deep-learning-based blood vessel segmentation, and multimodal registration of the mandible in computed tomography and magnetic resonance images. Comment: Accepted at ICCV 2023.
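
    A minimal sketch in the spirit of the morphological baselines mentioned above (a soft skeleton built from min/max-pooling, as popularized by soft-skeletonization losses), not the paper's topology-preserving algorithm, could look as follows in PyTorch.

        # Soft morphological skeletonization (illustrative baseline sketch).
        # x: (N, C, D, H, W) soft segmentation with values in [0, 1].
        import torch
        import torch.nn.functional as F

        def soft_erode(x):
            return -F.max_pool3d(-x, 3, stride=1, padding=1)  # min-pooling

        def soft_dilate(x):
            return F.max_pool3d(x, 3, stride=1, padding=1)

        def soft_skeleton(x, iters=10):
            # Accumulate the points removed by a soft morphological opening
            skel = F.relu(x - soft_dilate(soft_erode(x)))
            for _ in range(iters):
                x = soft_erode(x)
                skel = skel + F.relu(x - soft_dilate(soft_erode(x))) * (1 - skel)
            return skel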

    Computer-assisted detection of lung cancer nodules in medical chest X-rays

    Get PDF
    Diagnostic medicine was revolutionized in 1895 with Röntgen's discovery of X-rays. X-ray photography has played a very prominent role in diagnostics of all kinds since then and continues to do so. More sophisticated and successful medical imaging systems are certainly available, including Magnetic Resonance Imaging (MRI), Computerized Tomography (CT) and Positron Emission Tomography (PET). However, the hardware installation and operating costs of these systems remain considerably higher than those of X-ray systems, and conventional X-ray photography has the advantage of producing an image in significantly less time than MRI, CT and PET. X-ray photography is therefore still used extensively, especially in third-world countries, and the X-ray remains the routine diagnostic tool for chest complaints. Lung cancer may be diagnosed by the identification of a lung cancer nodule in a chest X-ray. Curing lung cancer depends upon detection and diagnosis at an early stage. Presently the five-year survival rate of lung cancer patients is approximately 10%. If lung cancer can be detected while the tumour is still small and localized, the five-year survival rate increases to about 40%; however, currently only 20% of lung cancer cases are diagnosed at this early stage. Giger et al. wrote that "detection and diagnosis of cancerous lung nodules in chest radiographs are among the most important and difficult tasks performed by radiologists".

    Design-Informed Generative Modelling using Structural Optimization

    Full text link
    Although various structural optimization techniques have a sound mathematical basis, the practical constructability of optimal designs poses a great challenge at the manufacturing stage. Currently, only a limited number of unified frameworks output ready-to-manufacture parametric Computer-Aided Design (CAD) models of the optimal designs. From a generative design perspective, it is essential to have a single platform that outputs a structurally optimized CAD model, because CAD models are an integral part of most industrial product development and manufacturing stages. This study focuses on developing a novel unified workflow handling topology, layout and size optimization in a single parametric platform, which subsequently outputs a ready-to-manufacture CAD model. All such outputs are checked and validated for structural requirements (strength, stiffness and stability) in accordance with standard codes of practice. In the proposed method, first, a topology-optimal model is generated and converted to a one-pixel-wide chain model using skeletonization. Secondly, a spatial frame is extracted from the skeleton for member size and layout optimization. Finally, the CAD model is generated using constructive solid geometry trees, and the structural integrity of each member is assessed to ensure structural robustness prior to manufacturing. Various examples presented in the paper showcase the validity of the proposed method across different engineering disciplines.
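
    As a hedged sketch of the skeletonization step alone (the member size/layout optimization and CAD generation are not shown), a thresholded topology-optimization density field can be thinned to a one-pixel-wide chain, and candidate frame joints can be read off by counting skeleton neighbours; the threshold value and the use of scikit-image/SciPy are my assumptions.

        # Skeletonize a 2D topology-optimized density field and locate joints.
        import numpy as np
        from skimage.morphology import skeletonize
        from scipy.ndimage import convolve

        def extract_joints(density, threshold=0.5):
            solid = density > threshold              # binarize the density field
            skel = skeletonize(solid)                # one-pixel-wide chain model
            kernel = np.ones((3, 3), dtype=int)
            kernel[1, 1] = 0                         # count 8-connected neighbours
            nbrs = convolve(skel.astype(int), kernel, mode='constant')
            joints = skel & ((nbrs == 1) | (nbrs >= 3))  # endpoints and junctions
            return skel, np.argwhere(joints)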