
    DCTM: Discrete-Continuous Transformation Matching for Semantic Flow

    Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions for more complex deformations such as affine transformations are lacking because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
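
    The alternating discrete/continuous update described above can be sketched on a toy 1-D problem. Everything below is an illustrative assumption rather than the paper's method: `data_cost` stands in for the CNN-descriptor matching cost, the fixed ±`step` perturbations and neighbour labels stand in for the discrete label set, and a 3-tap moving average stands in for the constant-time edge-aware filtering.

```python
import numpy as np

def dctm_sketch(data_cost, init_labels, n_iters=10, step=0.1, smooth_w=0.3):
    """Toy 1-D analogue of discrete-continuous matching (illustrative only).

    data_cost(i, t) scores assigning transformation parameter t to pixel i.
    Each iteration alternates (1) a discrete step, where every pixel picks
    the cheapest label among its own, its neighbours', and small
    perturbations of its current label, and (2) a continuous step, where
    the label field is relaxed toward a locally averaged version -- a
    crude stand-in for edge-aware filtering.
    """
    labels = np.asarray(init_labels, dtype=float).copy()
    n = len(labels)
    for _ in range(n_iters):
        new = labels.copy()
        for i in range(n):
            # Discrete step: candidate labels from perturbation + propagation.
            cands = {labels[i], labels[i] - step, labels[i] + step}
            if i > 0:
                cands.add(labels[i - 1])
            if i < n - 1:
                cands.add(labels[i + 1])
            new[i] = min(cands, key=lambda t: data_cost(i, t))
        # Continuous regularization: blend with a 3-tap moving average.
        filt = new.copy()
        filt[1:-1] = (new[:-2] + new[1:-1] + new[2:]) / 3.0
        labels = (1 - smooth_w) * new + smooth_w * filt
    return labels
```

    On a synthetic ramp of ground-truth parameters, the labels drift toward the minimizers of the data cost while the filtering keeps the field smooth.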

    The Astrophysical Multipurpose Software Environment

    We present the open source Astrophysical Multi-purpose Software Environment (AMUSE, www.amusecode.org), a component library for performing astrophysical simulations involving different physical domains and scales. It couples existing codes within a Python framework based on a communication layer using MPI. The interfaces are standardized for each domain and their implementation based on MPI guarantees that the whole framework is well-suited for distributed computation. It includes facilities for unit handling and data storage. Currently it includes codes for gravitational dynamics, stellar evolution, hydrodynamics and radiative transfer. Within each domain the interfaces to the codes are as similar as possible. We describe the design and implementation of AMUSE, as well as the main components and community codes currently supported, and we discuss the code interactions facilitated by the framework. Additionally, we demonstrate how AMUSE can be used to resolve complex astrophysical problems by presenting example applications. Comment: 23 pages, 25 figures, accepted for A&
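
    The coupling pattern described — community codes behind standardized per-domain interfaces, with unit handling guarding the exchanged data — can be sketched as follows. The names (`Quantity`, `GravityInterface`, `ToyNBody`) are hypothetical illustrations, not AMUSE's actual API, and the MPI channel to a real community code is elided:

```python
# Hypothetical sketch of the coupling pattern: each community code sits
# behind a standardized per-domain interface, and a thin unit layer guards
# the data exchanged between codes. Names are illustrative, not AMUSE's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """Minimal unit-tagged value, standing in for AMUSE's unit handling."""
    value: float
    unit: str

    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Quantity(self.value + other.value, self.unit)

class GravityInterface:
    """Standardized interface for the gravitational-dynamics domain;
    every community code in the domain exposes the same methods."""
    def evolve_model(self, t_end: Quantity) -> None:
        raise NotImplementedError

class ToyNBody(GravityInterface):
    """Stand-in for a real community code; in AMUSE the call would be
    forwarded over an MPI channel to a separately running worker."""
    def __init__(self):
        self.model_time = Quantity(0.0, "Myr")

    def evolve_model(self, t_end):
        self.model_time = t_end
```

    The point of the standardized interface is interchangeability: any gravity code exposing `evolve_model` can be swapped in without touching the driving script, and the unit layer rejects physically inconsistent data at the boundary.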

    Hierarchical bounding structures for efficient virial computations: Towards a realistic molecular description of cholesterics

    We detail the application of bounding volume hierarchies to accelerate second-virial evaluations for arbitrary complex particles interacting through hard and soft finite-range potentials. This procedure, based on the construction of neighbour lists through the combined use of recursive atom-decomposition techniques and binary overlap search schemes, is shown to scale sub-logarithmically with particle resolution in the case of molecular systems with high aspect ratios. Its implementation within an efficient numerical and theoretical framework based on classical density functional theory enables us to investigate the cholesteric self-assembly of a wide range of experimentally-relevant particle models. We illustrate the method through the determination of the cholesteric behaviour of hard, structurally-resolved twisted cuboids, and report quantitative evidence of the long-predicted phase handedness inversion with increasing particle thread angles near the phenomenological threshold value of 45°. Our results further highlight the complex relationship between microscopic structure and helical twisting power in such model systems, which may be attributed to subtle geometric variations of their chiral excluded-volume manifold.
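
    The pruning idea can be illustrated with a 1-D toy: each "molecule" is a list of (center, radius) atoms, the bounding volumes are intervals, and overlapping atom pairs between two molecules are counted by descending the two hierarchies only where their bounding intervals intersect. This is an illustrative sketch, not the paper's implementation:

```python
def build_bvh(atoms):
    """Binary bounding-interval hierarchy over 1-D atoms given as
    (center, radius) pairs. Node layout: (lo, hi, leaf_atom, left, right),
    with leaf_atom set only at leaves."""
    atoms = sorted(atoms)
    if len(atoms) == 1:
        (c, r), = atoms
        return (c - r, c + r, atoms[0], None, None)
    mid = len(atoms) // 2
    left, right = build_bvh(atoms[:mid]), build_bvh(atoms[mid:])
    return (min(left[0], right[0]), max(left[1], right[1]), None, left, right)

def count_overlaps(a, b):
    """Count overlapping atom pairs between two hierarchies, descending
    only into branches whose bounding intervals intersect."""
    lo_a, hi_a, leaf_a, la, ra = a
    lo_b, hi_b, leaf_b, lb, rb = b
    if hi_a < lo_b or hi_b < lo_a:
        return 0  # bounding volumes disjoint: prune this whole subtree pair
    if leaf_a and leaf_b:
        return 1  # two atoms whose intervals intersect
    if leaf_a:
        return count_overlaps(a, lb) + count_overlaps(a, rb)
    return count_overlaps(la, b) + count_overlaps(ra, b)
```

    Against a brute-force pair loop this returns the same count while skipping entire subtree pairs whose bounds are disjoint, which is the source of the favourable scaling for elongated, high-aspect-ratio particles.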

    Monolithic multiphysics simulation of hypersonic aerothermoelasticity using a hybridized discontinuous Galerkin method

    This work presents the implementation of a hybridized discontinuous Galerkin (DG) method for robust simulation of the hypersonic aerothermoelastic multiphysics system. Simulation of hypersonic vehicles requires accurate resolution of complex multiphysics interactions including the effects of high-speed turbulent flow, extreme heating, and vehicle deformation due to considerable pressure loads and thermal stresses. However, the state-of-the-art procedures for hypersonic aerothermoelasticity consist of low-fidelity approaches and partitioned coupling schemes. These approaches preclude robust design and analysis of hypersonic vehicles for a number of reasons. First, low-fidelity approaches are limited to simple geometries and cannot capture small-scale flow features (e.g. turbulence, shocks, and boundary layers), which greatly degrades modeling robustness and solution accuracy. Second, partitioned coupling approaches can introduce considerable temporal and spatial inaccuracies which are not trivially remedied. In light of these barriers, we propose the development of a monolithically-coupled hybridized DG approach to enable robust design and analysis of hypersonic vehicles with arbitrary geometries. Monolithic coupling methods implement a coupled multiphysics system as a single, or monolithic, equation system to be resolved by a single simulation approach. Further, monolithic approaches are free from the physical inaccuracies and instabilities imposed by partitioned approaches and enable time-accurate evolution of the coupled physics system. In this work, a DG method is considered due to its ability to accurately resolve second-order partial differential equations (PDEs) of all classes. We note that the hypersonic aerothermoelastic system is composed of PDEs of all three classes. Hybridized DG methods are specifically considered due to their exceptional computational efficiency compared to traditional DG methods.
It is expected that our monolithic hybridized DG implementation of the hypersonic aerothermoelastic system will 1) provide the physical accuracy necessary to capture complex physical features, 2) be free from any spatial and temporal inaccuracies or instabilities inherent to partitioned coupling procedures, 3) represent a transition to high-fidelity simulation methods for hypersonic aerothermoelasticity, and 4) enable efficient analysis of hypersonic aerothermoelastic effects on arbitrary geometries.
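
    The contrast with partitioned coupling can be made concrete on a toy two-field system: both residuals are assembled into one vector and driven to zero by a single Newton iteration, so the inter-field feedback is resolved implicitly at every step. The two scalar residuals below are invented stand-ins, not the aerothermoelastic equations:

```python
import numpy as np

def residual(x):
    """Toy coupled residual: a deflection-like unknown u driven by a
    temperature-like unknown T, and vice versa (illustrative only)."""
    u, T = x
    return np.array([
        u - 0.1 * (1.0 + T),   # "structure": deflection driven by heating
        T - 0.5 - 0.2 * u,     # "thermal": heating fed back by deflection
    ])

def monolithic_newton(x0, tol=1e-12, max_iter=50):
    """Newton's method on the full coupled residual: one equation system,
    one solver, in contrast to alternating single-field solves."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian of the monolithic residual.
        eps, n = 1e-7, len(x)
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x = x - np.linalg.solve(J, r)
    return x
```

    A partitioned scheme would instead alternate solving each equation with the other field frozen, which converges only linearly and can destabilize under strong coupling; the monolithic Newton iteration resolves the feedback in very few steps.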

    VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations

    Recent advancements in implicit neural representations have contributed to high-fidelity surface reconstruction and photorealistic novel view synthesis. However, the computational complexity inherent in these methodologies presents a substantial impediment, constraining the attainable frame rates and resolutions in practical applications. In response to this predicament, we propose VQ-NeRF, an effective and efficient pipeline for enhancing implicit neural representations via vector quantization. The essence of our method involves reducing the sampling space of NeRF to a lower resolution and subsequently reinstating it to the original size utilizing a pre-trained VAE decoder, thereby effectively mitigating the sampling time bottleneck encountered during rendering. Although the codebook furnishes representative features, reconstructing fine texture details of the scene remains challenging due to high compression rates. To overcome this constraint, we design an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales to enhance the network's ability to preserve fine details. Furthermore, we incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions. Extensive experiments demonstrate the effectiveness of our model in achieving the optimal trade-off between rendering quality and efficiency. Evaluation on the DTU, BlendMVS, and H3DS datasets confirms the superior performance of our approach. Comment: Submitted to the 38th Annual AAAI Conference on Artificial Intelligenc
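
    The core speed trick — querying the expensive field on a reduced grid and reinstating full resolution with a decoder — can be sketched as below. The bilinear upsampling is a stand-in for the pre-trained VAE decoder, and `radiance_fn` is a hypothetical per-pixel field query; none of this is the paper's actual pipeline:

```python
import numpy as np

def render_via_compressed_space(radiance_fn, H, W, scale=4):
    """Query the expensive field only on an (H/scale x W/scale) grid, then
    reinstate full resolution. Bilinear upsampling stands in for the
    pre-trained VAE decoder. Returns the image and the number of
    expensive field queries actually made."""
    h, w = H // scale, W // scale
    ys = (np.arange(h) + 0.5) / h           # coarse pixel centers in [0, 1]
    xs = (np.arange(w) + 0.5) / w
    coarse = np.array([[radiance_fn(y, x) for x in xs] for y in ys])
    # Stand-in "decoder": bilinear upsample back to H x W.
    yi = np.clip((np.arange(H) + 0.5) / scale - 0.5, 0, h - 1)
    xi = np.clip((np.arange(W) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(yi).astype(int); x0 = np.floor(xi).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (yi - y0)[:, None]; wx = (xi - x0)[None, :]
    img = ((1 - wy) * (1 - wx) * coarse[np.ix_(y0, x0)]
           + (1 - wy) * wx * coarse[np.ix_(y0, x1)]
           + wy * (1 - wx) * coarse[np.ix_(y1, x0)]
           + wy * wx * coarse[np.ix_(y1, x1)])
    return img, h * w
```

    For a 16×16 image at scale 4 the field is queried only 16 times instead of 256, which is exactly the sampling-time bottleneck the pipeline targets; the paper's multi-scale training then compensates for the detail lost to compression.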

    Transfer learning-based physics-informed convolutional neural network for simulating flow in porous media with time-varying controls

    A physics-informed convolutional neural network (PICNN) is proposed to simulate two-phase flow in porous media with time-varying well controls. While most PICNNs in the existing literature perform parameter-to-state mapping, our proposed network parameterizes the solution with time-varying controls to establish a control-to-state regression. First, a finite volume scheme is adopted to discretize the flow equations and formulate a loss function that respects mass conservation laws. Neumann boundary conditions are seamlessly incorporated into the semi-discretized equations so no additional loss term is needed. The network architecture comprises two parallel U-Net structures, with network inputs being well controls and outputs being the system states. To capture the time-dependent relationship between inputs and outputs, the network is designed to mimic discretized state space equations. We train the network progressively for every timestep, enabling it to simultaneously predict oil pressure and water saturation at each timestep. After training the network for one timestep, we leverage transfer learning techniques to expedite the training process for subsequent timesteps. The proposed model is used to simulate oil-water porous flow scenarios with varying reservoir gridblocks, and aspects including computational efficiency and accuracy are compared against corresponding numerical approaches. The results underscore the potential of PICNN in effectively simulating systems with numerous grid blocks, as computation time does not scale with model dimensionality. We assess the temporal error using 10 different testing controls with variation in magnitude and another 10 with higher alternation frequency, using the proposed control-to-state architecture. Our observations suggest the need for a more robust and reliable model when dealing with controls that exhibit significant variations in magnitude or frequency.
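
    The progressive, warm-started training loop can be sketched with a deliberately tiny surrogate: a linear map stands in for the pair of U-Nets, one model is fitted per timestep to map the well control to the state, and each fit is initialized from the previous timestep's weights, which is the transfer-learning step. All names and shapes here are illustrative assumptions:

```python
import numpy as np

def fit_step(W0, controls, targets, lr=0.1, n_iter=300):
    """Fit state ~= W @ control by gradient descent on the mean squared
    error, starting from W0 (the warm start)."""
    W = W0.copy()
    for _ in range(n_iter):
        pred = controls @ W.T
        grad = 2.0 * (pred - targets).T @ controls / len(controls)
        W -= lr * grad
    return W

def train_progressively(control_seq, state_seq, dim_u, dim_x):
    """One surrogate per timestep; each timestep's fit is warm-started
    from the previous one's weights (transfer learning). control_seq[k]
    has shape (n_samples, dim_u), state_seq[k] shape (n_samples, dim_x)."""
    W = np.zeros((dim_x, dim_u))        # timestep 0 trains from scratch
    models = []
    for u_k, x_k in zip(control_seq, state_seq):
        W = fit_step(W, u_k, x_k)       # warm start = transfer from previous
        models.append(W.copy())
    return models
```

    Because consecutive timesteps have similar dynamics, the warm start places the optimizer near the new optimum, which is what shortens training for every timestep after the first.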

    A hierarchical finite element Monte Carlo method for stochastic two-scale elliptic equations

    We consider two-scale elliptic equations whose coefficients are random. In particular, we study two cases: in the first case, the coefficients are obtained from an ergodic dynamical system acting on a probability space, and in the second case, the coefficients are periodic in the microscale but are random. We suppose that the coefficients also depend on the macroscopic slow variables. Although the effective coefficient of the ergodic homogenization problem is deterministic, approximating it requires solving cell equations in a large but finite "truncated" cube and computing an approximate effective coefficient from the solution of this equation. This approximate effective coefficient is, however, realization-dependent, and the deterministic effective coefficient of the homogenization problem can be approximated by taking its expectation. In the periodic random setting, the effective coefficient for each realization is obtained from the solutions of cell equations posed in the unit cube, but to approximate its average accurately by the Monte Carlo method, many uncorrelated realizations must be considered. Straightforward employment of finite element approximation and the Monte Carlo method to compute this expectation with the same level of finite element resolution and the same number of Monte Carlo samples at every macroscopic point is prohibitively expensive. We develop a hierarchical finite element Monte Carlo algorithm to approximate the effective coefficients at a dense hierarchical network of macroscopic points. The method requires an optimal level of complexity that is essentially equal to that for computing the effective coefficient at one macroscopic point, and achieves essentially the same accuracy. The levels of accuracy for solving cell problems and for the Monte Carlo sampling are chosen according to the level in the hierarchy that the macroscopic points belong to.
At points where the cell problems are solved with higher accuracy and the effective coefficients are approximated with a larger number of Monte Carlo samples, the solutions and effective coefficients are employed as correctors for the effective coefficient at those points where the cell problems are solved with lower accuracy and with fewer Monte Carlo samples. The method combines the hierarchical finite element method for solving cell problems at a dense network of macroscopic points with the optimal complexity developed in D. L. Brown, Y. Efendiev and V. H. Hoang, Multiscale Model. Simul. 11 (2013), with a hierarchical Monte Carlo sampling algorithm that uses a different number of samples at different macroscopic points depending on the level in the hierarchy that the macroscopic points belong to. Proof of concept numerical examples confirm the theoretical results.
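
    The corrector idea can be sketched with a toy hierarchy of macroscopic points: a sparse level is sampled accurately, a dense level cheaply, and the dense estimates are corrected by the accurate-minus-cheap discrepancy at the nearest sparse point, with common random numbers making the two cheap estimates' sampling errors cancel. The quantity `g(x, w)` is an invented stand-in for the effective coefficient computed from one cell-problem realization:

```python
import numpy as np

def hierarchical_mc(g, points_coarse, points_fine, n_hi, n_lo, rng):
    """Toy hierarchical Monte Carlo: a sparse set of macroscopic points
    gets many samples (n_hi); the dense set gets few (n_lo) plus a
    corrector from its nearest sparse point, evaluated with common
    random numbers so the low-sample errors correlate and cancel."""
    omegas_hi = rng.standard_normal(n_hi)
    omegas_lo = omegas_hi[:n_lo]            # shared samples at both levels
    accurate = {x: np.mean([g(x, w) for w in omegas_hi])
                for x in points_coarse}
    out = {}
    for x in points_fine:
        x0 = min(points_coarse, key=lambda p: abs(p - x))  # hierarchy parent
        cheap_x = np.mean([g(x, w) for w in omegas_lo])
        cheap_x0 = np.mean([g(x0, w) for w in omegas_lo])
        out[x] = cheap_x + (accurate[x0] - cheap_x0)       # corrector
    return out
```

    The cost at the dense level is only n_lo evaluations per point, yet each estimate inherits the accuracy of its sparse parent to the extent that the realization-wise fluctuation of g varies smoothly between nearby macroscopic points.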