
    On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling

    A multi-fidelity surrogate model for highly nonlinear multiscale problems is proposed. It is based on the introduction of two different surrogate models and an adaptive on-the-fly switching between them. The two concurrent surrogates are built incrementally, starting from a moderate set of evaluations of the full order model. To this end, a reduced order model (ROM) is generated first. Using a hybrid ROM-preconditioned FE solver, additional effective stress-strain data is simulated while the number of samples is kept at a moderate level by means of a dedicated, physics-guided sampling technique. Machine learning (ML) is subsequently used to build the second surrogate by means of artificial neural networks (ANN). Different ANN architectures are explored and the features used as inputs of the ANN are fine-tuned in order to improve the overall quality of the ML model. Additional ANN surrogates for the stress errors are generated, and conservative design guidelines for error surrogates are presented by adapting the loss functions of the ANN training in pure regression or pure classification settings. The error surrogates can be used as quality indicators in order to adaptively select the appropriate -- i.e. efficient yet accurate -- surrogate. Two strategies for the on-the-fly switching are investigated and a practicable and robust algorithm is proposed that eliminates relevant technical difficulties attributed to model switching. The provided algorithms and ANN design guidelines can easily be adopted for different problem settings and thereby enable generalization of the used machine learning techniques to a wide range of applications. The resulting hybrid surrogate is employed in challenging multilevel FE simulations for a three-phase composite with pseudo-plastic micro-constituents. Numerical examples highlight the performance of the proposed approach.
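    As a rough illustration of the switching idea, the sketch below uses an error surrogate as a quality indicator to decide, per evaluation, between a cheap ANN surrogate and an accurate ROM fallback. All objects (the toy material law, the `ann`, `ann_error`, and `rom` callables, and the tolerance) are hypothetical stand-ins, not the paper's models.

```python
# Minimal sketch of error-surrogate-driven model switching; all interfaces
# and the toy material law are hypothetical stand-ins.
import numpy as np

def evaluate_stress(strain, ann, ann_error, rom, tol=1e-3):
    """Return a stress prediction, preferring the cheap ANN surrogate.

    ann       -- fast surrogate mapping strain -> stress
    ann_error -- surrogate predicting the ANN's stress error
    rom       -- reduced order model used as the accurate fallback
    tol       -- admissible predicted stress error
    """
    stress = ann(strain)
    if ann_error(strain) <= tol:       # predicted error acceptable
        return stress, "ANN"
    return rom(strain), "ROM"          # fall back to the accurate model

# Toy stand-ins: a linear "material law", an imperfect ANN, its error model.
C = np.diag([2.0, 2.0, 1.0])
rom = lambda e: C @ e
ann = lambda e: C @ e + 0.01 * np.sin(10 * e)           # cheap but imperfect
ann_error = lambda e: float(np.max(np.abs(0.01 * np.sin(10 * e))))

stress, used = evaluate_stress(np.array([0.1, 0.0, 0.05]), ann, ann_error, rom)
print(used, stress)
```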

    Massive MIMO Performance - TDD Versus FDD: What Do Measurements Say?

    Downlink beamforming in Massive MIMO relies either on uplink pilot measurements, exploiting reciprocity and TDD operation, or on the use of a predetermined grid of beams with user equipments reporting their preferred beams, mostly in FDD operation. Massive MIMO in its originally conceived form uses the first strategy, with uplink pilots, whereas there is currently significant commercial interest in the second, grid-of-beams. It has been shown analytically that in isotropic scattering (independent Rayleigh fading) the first approach outperforms the second. Nevertheless, there remains controversy regarding their relative performance in practice. In this contribution, the performance of these two strategies is compared using measured channel data at 2.6 GHz.
    Comment: Submitted to IEEE Transactions on Wireless Communications, 31/Mar/201
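    The analytical claim referenced above (full-CSI beamforming beats a grid of beams under independent Rayleigh fading) can be illustrated with a small Monte Carlo sketch. The antenna count, the DFT beam grid, and the single-user setup are illustrative assumptions, not the paper's measurement setup.

```python
# Hedged toy comparison (i.i.d. Rayleigh, single user): full-CSI maximum
# ratio transmission vs. selecting the best beam from a DFT grid.
import numpy as np

rng = np.random.default_rng(0)
M, trials = 64, 2000                       # antennas, Monte Carlo runs
F = np.fft.fft(np.eye(M)) / np.sqrt(M)     # unit-norm DFT grid of beams

g_csi, g_grid = [], []
for _ in range(trials):
    h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    g_csi.append(np.abs(h.conj() @ (h / np.linalg.norm(h)))**2)   # MRT gain
    g_grid.append(np.max(np.abs(F.conj().T @ h))**2)              # best beam

print(f"mean gain, full CSI : {np.mean(g_csi):.1f}")   # scales like M
print(f"mean gain, best beam: {np.mean(g_grid):.1f}")  # scales like log M
```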

    Non-Convex Rank/Sparsity Regularization and Local Minima

    This paper considers the problem of recovering either a low rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with $\ell_1$ or nuclear norm relaxations. It is well known that this approach can be guaranteed to recover a near-optimal solution if a so-called restricted isometry property (RIP) holds. On the other hand, it is also known to perform soft thresholding, which results in a shrinking bias that can degrade the solution. In this paper we study an alternative non-convex regularization term. This formulation does not penalize elements that are larger than a certain threshold, making it much less prone to small solutions. Our main theoretical results show that if a RIP holds then the stationary points are often well separated, in the sense that their differences must be of high cardinality/rank. Thus, with a suitable initial solution the approach is unlikely to fall into a bad local minimum. Our numerical tests show that the approach is likely to converge to a better solution than standard $\ell_1$/nuclear-norm relaxation even when starting from trivial initializations. In many cases our results can also be used to verify global optimality of our method.
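    The paper's exact regularizer is not reproduced in this abstract, so the sketch below only contrasts the generic behaviours at stake: the $\ell_1$ proximal operator (soft thresholding) shrinks every surviving entry, whereas the prox of a penalty that is flat above a threshold, here $\mu\|x\|_0$ as a stand-in, leaves large entries unbiased.

```python
# Illustration only (not the paper's regularizer): soft vs. hard thresholding.
import numpy as np

def soft_threshold(y, lam):
    """prox of lam*||x||_1: shrinks all surviving entries toward zero."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def hard_threshold(y, mu):
    """prox of mu*||x||_0: keeps entries with |y| > sqrt(2*mu) unchanged."""
    return np.where(np.abs(y) > np.sqrt(2 * mu), y, 0.0)

y = np.array([3.0, 1.5, 0.2, -4.0])
print(soft_threshold(y, 1.0))   # [ 2.   0.5  0.  -3. ]  <- shrinking bias
print(hard_threshold(y, 0.5))   # [ 3.   1.5  0.  -4. ]  <- no bias on large
```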

    Strong convergence of a fully discrete finite element approximation of the stochastic Cahn-Hilliard equation

    We consider the stochastic Cahn-Hilliard equation driven by additive Gaussian noise in a convex domain with polygonal boundary in dimension $d \le 3$. We discretize the equation using a standard finite element method in space and a fully implicit backward Euler method in time. By proving optimal error estimates on subsets of the probability space with arbitrarily large probability, together with uniform-in-time moment bounds, we show that the numerical solution converges strongly to the solution as the discretization parameters tend to zero.
    Comment: 25 pages
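    A heavily simplified toy of the time stepping, assuming a 1D periodic domain, a Fourier discretization instead of finite elements, and a linearly implicit IMEX Euler-Maruyama step instead of the paper's fully implicit backward Euler scheme:

```python
# Toy 1D stochastic Cahn-Hilliard:
#   du = Laplacian(u^3 - u - eps^2 * Laplacian u) dt + sigma dW,
# with the stiff fourth-order term implicit and the nonlinearity explicit.
import numpy as np

N, L, eps, sigma = 256, 2 * np.pi, 0.1, 0.05
dt, steps = 1e-4, 2000
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
k2, k4 = k**2, k**4

rng = np.random.default_rng(1)
u = 0.1 * rng.standard_normal(N)               # small random initial data

for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(N)  # crude additive noise increment
    nonlin = np.fft.fft(u**3 - u)
    u_hat = (np.fft.fft(u + sigma * dW) - dt * k2 * nonlin) / (1 + dt * eps**2 * k4)
    u = np.real(np.fft.ifft(u_hat))

print("u range after phase separation:", float(u.min()), float(u.max()))
```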

    On the Incentives to Shift to Low-Carbon Freight Transport

    The transport sector accounts for approximately 20% of EU-27 greenhouse gas (GHG) emissions and 27% of U.S. GHG emissions. With the Kyoto Protocol, Sweden and several other nations have agreed to reduce these emissions. Often, solutions that involve consolidating freight and moving it to more carbon-efficient transport technologies are advocated as the most advantageous. For such initiatives the technology already exists, so change is only a matter of implementation. But when aggregate data is examined, very little change for the better is seen. This thesis explores why this may be the case, with the purpose of increasing the understanding of the incentives to shift to low-carbon freight transport. This is explored in a three-phase research structure where, first, macro-data is analyzed, after which theory is built using two multiple case studies, which serve as input to three mathematical modeling studies of different parts of the operator-service provider/forwarder-shipper chain of actors. By considering the chain of actors on the freight transport market as a service supply chain, the research in this thesis is able to use methods from, and make contributions to, the sustainable supply chain management literature as well as the literature on transport contracting. With this literature as the point of departure, the studies show that there is a matching problem associated with the implementation of low-carbon transports: with the currently used contracts it is usually not rational for the actors on the market to shift to low-carbon transports, even though the total cost on the market, in aggregate, may be reduced by shifting. Nevertheless, there are situations where shifting is rational for all actors. Creating such situations normally requires implementing long-term contracts, and the models in this thesis show how such contracts can be designed. However, the models also show that situations where implementation is rational are very sensitive to changes in external parameters such as demand volatility, making implementation high-risk in many cases. Another downside is that the environmental improvement is not always as large as one would expect, due to inventory build-up and extra truck transports. For low-carbon transports to be implemented at large scale, their costs need to be more in line with those of conventional transports, and contracts that allocate risks and profits better need to be implemented. Not until these issues are better understood, and such contracts and regulation implemented, can a large-scale shift to low-carbon transports be expected.
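    The matching problem can be made concrete with purely hypothetical numbers: shifting modes lowers the aggregate cost, yet one actor loses under the existing per-shipment tariff, so a contract must transfer part of the savings for the shift to be individually rational.

```python
# Hypothetical numbers only -- a toy of the 'matching problem'.
truck = {"shipper": 100.0, "operator": 60.0}   # costs per period
rail  = {"shipper": 115.0, "operator": 30.0}   # cheaper in total, but the
                                               # shipper pays more (inventory,
                                               # lost flexibility)

print("aggregate saving from shifting:",
      sum(truck.values()) - sum(rail.values()))                      # 15.0
print("shipper's own change:", truck["shipper"] - rail["shipper"])   # -15.0

# A long-term contract can share the operator's saving with the shipper:
transfer = 20.0                                # hypothetical rebate per period
print("shipper gain with rebate :",
      truck["shipper"] - (rail["shipper"] - transfer))               # +5.0
print("operator gain with rebate:",
      truck["operator"] - (rail["operator"] + transfer))             # +10.0
```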

    Fast simulation of 3D elastic response for wheel–rail contact loading using Proper Generalized Decomposition

    To increase computational efficiency, we adopt Proper Generalized Decomposition (PGD) to solve a reduced-order problem for the displacement field of a three-dimensional rail head exposed to different contact scenarios. The three-dimensional solid rail head is modeled as a two-dimensional cross-section, with the coordinate along the rail being treated as a parameter in the PGD approximation. A novel feature is that this allows us to solve the full three-dimensional model with a nearly two-dimensional computational effort. Additionally, we incorporate the distributed contact load predicted from dynamic vehicle-track simulations as extra coordinates in the PGD formulation, using a semi-Hertzian contact model. The problem is formulated in two ways: a general ansatz that treats numerous parameters, some of which exhibit a linear influence, and a linear ansatz in which multiple PGD solutions are solved for. In particular, situations where certain parameters become invariant are handled. We assess the accuracy and efficiency of the proposed strategy through a series of verification examples. It is shown that the PGD solution converges towards the FE solution at reduced computational cost. Furthermore, solving for the PGD approximation based on the load parameterization in an offline stage allows expedient handling of the wheel-rail contact problem online.
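    The core PGD mechanics (separated modes found by an alternating fixed point, enriched greedily) can be sketched on a toy Poisson problem. The discretization, source term, and mode count below are illustrative assumptions, far simpler than the rail-head elasticity problem.

```python
# Minimal PGD sketch: -Laplace(u) = 1 on the unit square (finite
# differences), u approximated by a sum of separated modes X_r(x) * Y_r(y).
import numpy as np

n = 50; h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2        # 1D Dirichlet Laplacian
F = np.ones((n, n))                                # separable source f = 1
I = np.eye(n)

U = np.zeros((n, n))
for _ in range(8):                                 # greedy enrichment loop
    R = F - A @ U - U @ A                          # current residual
    X, Y = np.ones(n), np.ones(n)
    for _ in range(10):                            # alternating fixed point
        X = np.linalg.solve((Y @ Y) * A + (Y @ A @ Y) * I, R @ Y)
        Y = np.linalg.solve((X @ X) * A + (X @ A @ X) * I, R.T @ X)
    U += np.outer(X, Y)                            # add the converged mode

# Compare against a direct solve of the full 2D problem.
K = np.kron(A, I) + np.kron(I, A)
u_ref = np.linalg.solve(K, F.ravel()).reshape(n, n)
print("relative error with 8 modes:",
      np.linalg.norm(U - u_ref) / np.linalg.norm(u_ref))
```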

    Scaling up MIMO: Opportunities and Challenges with Very Large Arrays

    This paper surveys recent advances in the area of very large MIMO systems. By very large MIMO, we mean systems that use antenna arrays with an order of magnitude more elements than in systems being built today, say a hundred antennas or more. Very large MIMO entails an unprecedented number of antennas simultaneously serving a much smaller number of terminals. The disparity in numbers emerges as a desirable operating condition and a practical one as well. The number of terminals that can be simultaneously served is limited, not by the number of antennas, but rather by our inability to acquire channel-state information for an unlimited number of terminals. Larger numbers of terminals can always be accommodated by combining very large MIMO technology with conventional time- and frequency-division multiplexing via OFDM. Very large MIMO arrays constitute a new research field in communication theory, propagation, and electronics, and represent a paradigm shift in thinking about theory, systems, and implementation. The ultimate vision of very large MIMO systems is that the antenna array would consist of small active antenna units, plugged into an (optical) fieldbus.
    Comment: Accepted for publication in the IEEE Signal Processing Magazine, October 201
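    The asymptotics that underpin this vision can be checked numerically: for i.i.d. channel vectors, inner products between different users' channels vanish relative to the antenna count M (favorable propagation), while each channel's normalized energy concentrates (channel hardening). A minimal sketch:

```python
# Favorable propagation and channel hardening for i.i.d. Rayleigh channels.
import numpy as np

rng = np.random.default_rng(0)
for M in (10, 100, 1000, 10000):
    h1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    h2 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    print(f"M={M:6d}  |h1^H h2|/M = {abs(h1.conj() @ h2) / M:.4f}  "
          f"||h1||^2/M = {np.linalg.norm(h1)**2 / M:.4f}")
```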

    A Projected Gradient Descent Method for CRF Inference allowing End-To-End Training of Arbitrary Pairwise Potentials

    Are we using the right potential functions in the Conditional Random Field models that are popular in the Vision community? Semantic segmentation and other pixel-level labelling tasks have made significant progress recently due to the deep learning paradigm. However, most state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors, label consistencies and feature-based image conditioning. In this paper, we challenge this view by developing a new inference and learning framework that can learn pairwise CRF potentials restricted only by their dependence on the image pixel values and the size of the support. Both standard spatial and high-dimensional bilateral kernels are considered. Our framework is based on the observation that CRF inference can be achieved via projected gradient descent and, consequently, can easily be integrated into deep neural networks to allow for end-to-end training. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential. In addition, we compare our inference method to the commonly used mean-field algorithm. Our framework is evaluated on several public benchmarks for semantic segmentation, with improved performance compared to previous state-of-the-art CNN+CRF models.
    Comment: Presented at the EMMCVPR 2017 conference
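    A minimal sketch of the inference view taken here: relaxed per-pixel label distributions are updated by a gradient step on a CRF energy and projected back onto the probability simplex. The 1D chain topology, Potts-style compatibility matrix, and step size are illustrative assumptions; in the paper the pairwise potential is learned end-to-end.

```python
# CRF inference as projected gradient descent on relaxed label marginals Q.
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    s = -np.sort(-v, axis=1)                     # rows sorted descending
    css = np.cumsum(s, axis=1) - 1.0
    idx = np.arange(1, v.shape[1] + 1)
    rho = np.sum(s * idx > css, axis=1)          # active-set size per row
    theta = css[np.arange(len(v)), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0.0)

rng = np.random.default_rng(0)
N, L = 12, 3                                     # pixels in a chain, labels
U = rng.standard_normal((N, L))                  # unary potentials
P = 1.0 - np.eye(L)                              # symmetric Potts-like pairwise

Q = project_simplex(rng.random((N, L)))          # initial relaxed labelling
for _ in range(100):
    grad = U.copy()
    grad[:-1] += Q[1:] @ P                       # neighbour to the right
    grad[1:] += Q[:-1] @ P                       # neighbour to the left
    Q = project_simplex(Q - 0.1 * grad)          # gradient step + projection

print("MAP-style labelling:", Q.argmax(axis=1))
```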