
    3D and 4D Models Used in Bridge Design and Education

    A bridge is a type of structure whose appearance normally deserves particular attention: it has an evident impact on its environment and represents a considerable investment, both of which justify careful evaluation. Aesthetic analysis is therefore an important issue when designing a new bridge, especially one to be built in an urban or road environment. In this context, the automatic generation of three-dimensional (3D) geometric models of the bridge under analysis, together with the walk-around and aerial simulations that can be generated over them, helps bridge designers evaluate its aesthetic concept and environmental impact. The bridge construction process can also be simulated, helping designers and builders review the progress of the construction work in situ. For that purpose, 4D (3D + time) models of the most frequent bridge construction methods were generated using virtual reality (VR) technology. The simulation of construction activity made possible by the developed interactive 4D model helps bridge designers analyse the whole construction process. The present study analyses how to generate 3D models of a bridge automatically and how to simulate its construction using VR capabilities.

    Expert judgment in climate science: How it is used and how it can be justified.

    Like any science marked by high uncertainty, climate science is characterized by widespread use of expert judgment. In this paper, we first show that, in climate science, expert judgment is used to overcome uncertainty, playing a crucial role in the domain and at times even supplanting models. One is left to wonder to what extent it is legitimate to grant expert judgment such epistemic authority in the climate context, especially as its production is particularly opaque. To begin answering this question, we highlight the key components of expert judgment. We then argue that the justification for its status and use depends on the competence and the individual subjective features of the expert producing the judgment, since expert judgment involves not only the expert's theoretical and tacit knowledge but also their intuition and values. This goes against the objective ideal in science and the criteria from social epistemology, which largely attempt to remove subjectivity from expertise.

    Stochastic Problems in the Simulation of Labor Supply

    Modern work in labor supply attempts to account for nonlinear budget sets created by government tax and transfer programs. Progressive taxation leads to nonlinear convex budget sets, while the earned income credit, social security contributions, AFDC, and the proposed NIT plans all lead to nonlinear, nonconvex budget sets. Where nonlinear budget sets occur, the expected value of the random variable, labor supply, can no longer be calculated by simply 'plugging in' the estimated coefficients. Properties of the stochastic terms, which arise from the residual or from a stochastic preference structure, need to be accounted for. This paper considers both analytical and Monte Carlo approaches to the problem. We attempt to find accurate and low-cost computational techniques that would permit extensive use of simulation methodology. Large samples are typically included in such simulations, which makes computational techniques an important consideration. But these large samples may also lead to simplifications in computational techniques because of the averaging process used in the calculation of simulation results. This paper investigates the tradeoffs available between computational accuracy and cost in simulation exercises over large samples.
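    As a concrete illustration of why 'plugging in' fails, the sketch below (a hypothetical two-bracket tax schedule and labor-supply equation, not the paper's model) approximates expected hours by Monte Carlo averaging over draws of the stochastic preference term; because the budget set is kinked, the average differs from the hours implied by the mean of that term.

```python
import numpy as np

# Hypothetical parameters (illustration only, not the paper's model).
rng = np.random.default_rng(42)
GROSS_WAGE = 10.0                 # dollars per hour
TAX_LOW, TAX_HIGH = 0.2, 0.4      # marginal tax rates below/above the kink
KINK_HOURS = 1000.0               # annual hours at which the higher bracket starts
A, B = 800.0, 60.0                # labor-supply rule: h* = A + B*log(net wage) + eps
SIGMA = 300.0                     # std. dev. of the stochastic preference term eps

def optimal_hours(eps):
    """Utility-maximizing hours on a two-segment convex (kinked) budget set."""
    h_low = A + B * np.log(GROSS_WAGE * (1 - TAX_LOW)) + eps    # tangency on the low-tax segment
    h_high = A + B * np.log(GROSS_WAGE * (1 - TAX_HIGH)) + eps  # tangency on the high-tax segment
    if h_low < KINK_HOURS:
        return max(h_low, 0.0)    # interior solution below the kink (hours cannot be negative)
    if h_high > KINK_HOURS:
        return h_high             # interior solution above the kink
    return KINK_HOURS             # otherwise the worker bunches at the kink

eps_draws = rng.normal(0.0, SIGMA, size=100_000)
mc_mean = np.mean([optimal_hours(e) for e in eps_draws])
plug_in = optimal_hours(0.0)      # naive 'plug in the mean of eps' calculation

print(f"Monte Carlo E[hours]    : {mc_mean:7.1f}")
print(f"hours at the mean of eps: {plug_in:7.1f}")   # differs because of the kink
```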

    Hybrid approach based on particle swarm optimization for electricity markets participation

    In many large-scale and time-consuming problems, the application of metaheuristics becomes essential, since these methods achieve solutions very close to the exact one in a much shorter time. In this work, we address the problem of portfolio optimization applied to electricity market negotiation. Since decision-making in a market environment is carried out in very short times, the application of metaheuristics is necessary. This work proposes a hybrid model that uses a simplified exact resolution of the problem to obtain the initial solution for a Particle Swarm Optimization (PSO) approach. Results show that the presented approach obtains better results in the metaheuristic search process. This work has received funding from the European Union's Horizon 2020 research and innovation programme under project DOMINOES (grant agreement No 771066), from FEDER Funds through the COMPETE program and from National Funds through FCT under project UID/EEA/00760/2019; Ricardo Faia is supported by FCT through the SFRH/BD/133086/2017 PhD scholarship.
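    A minimal sketch of the warm-start idea follows; all problem data (markets, prices, risk penalties) are hypothetical, and the "simplified exact resolution" is stood in for by the trivial risk-free optimum, so it matches the paper's strategy only in outline: one particle is seeded with that solution and the rest of the PSO swarm is random.

```python
import numpy as np

# All problem data below are hypothetical placeholders.
rng = np.random.default_rng(1)
N_MARKETS, TOTAL_ENERGY = 5, 100.0                          # markets and total volume (MWh)
EXPECTED_PRICE = np.array([42.0, 38.5, 40.2, 45.1, 39.0])   # expected prices (EUR/MWh)
RISK = np.array([2.0, 0.5, 1.0, 3.5, 0.8])                  # per-market risk penalties

def normalize(x):
    """Project onto the feasible set: non-negative volumes summing to TOTAL_ENERGY."""
    x = np.clip(x, 0.0, None)
    s = x.sum()
    return np.full_like(x, TOTAL_ENERGY / len(x)) if s == 0 else x / s * TOTAL_ENERGY

def objective(x):
    """Expected revenue minus a quadratic risk penalty (to be maximized)."""
    x = normalize(x)
    return EXPECTED_PRICE @ x - TOTAL_ENERGY * (RISK @ (x / TOTAL_ENERGY) ** 2)

def pso(n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0.0, TOTAL_ENERGY, (n_particles, N_MARKETS))   # random swarm ...
    pos[0] = np.where(EXPECTED_PRICE == EXPECTED_PRICE.max(),        # ... except particle 0, seeded
                      TOTAL_ENERGY, 0.0)                             # with the simplified exact solution
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return normalize(gbest), objective(gbest)

allocation, value = pso()
print("allocation per market:", np.round(allocation, 2), " objective:", round(value, 2))
```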

    Pre-optimizing variational quantum eigensolvers with tensor networks

    The variational quantum eigensolver (VQE) is a promising algorithm for demonstrating quantum advantage in the noisy intermediate-scale quantum (NISQ) era. However, optimizing VQE from random initial parameters is challenging due to a variety of issues including barren plateaus, optimization in the presence of noise, and slow convergence. While simulating quantum circuits classically is generically difficult, classical computing methods have been developed extensively, and powerful tools now exist to approximately simulate quantum circuits. This opens up various strategies that limit the amount of optimization that needs to be performed on quantum hardware. Here we present and benchmark an approach where we find good starting parameters for parameterized quantum circuits by classically simulating VQE, approximating the parameterized quantum circuit (PQC) as a matrix product state (MPS) with a limited bond dimension. Calling this approach the variational tensor network eigensolver (VTNE), we apply it to the 1D and 2D Fermi-Hubbard model with system sizes that use up to 32 qubits. We find that in 1D, VTNE can find parameters for PQCs whose energy error is within 0.5% relative to the ground state. In 2D, the parameters that VTNE finds have significantly lower energy than their starting configurations, and we show that starting VQE from these parameters requires non-trivially fewer operations to come down to a given energy. The higher the bond dimension we use in VTNE, the less work needs to be done in VQE. By generating classically optimized parameters as the initialization for the quantum circuit, one can alleviate many of the challenges that plague VQE on quantum computers.
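    The two-stage workflow can be sketched in miniature as follows; the paper approximates the PQC as an MPS with limited bond dimension, whereas this toy (with an assumed 2-qubit Hamiltonian and ansatz) uses an exact statevector as the classical surrogate, which is only feasible because the example is tiny.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-qubit Hamiltonian (assumed for illustration): H = Z0*Z1 + 0.5*(X0 + X1).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def energy(params):
    """Energy of a hardware-efficient ansatz (RY layer, CNOT, RY layer) applied to |00>."""
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return float(psi @ H @ psi)

# Stage 1: cheap classical pre-optimization of the circuit parameters
# (the role VTNE plays in the paper, there with an MPS of limited bond dimension).
warm_start = minimize(energy, x0=np.zeros(4), method="COBYLA").x

# Stage 2: on hardware, VQE would be initialized from `warm_start` instead of
# random angles; here we only report the classically pre-optimized energy.
print("pre-optimized energy:", round(energy(warm_start), 4))
print("exact ground energy :", round(float(np.linalg.eigvalsh(H)[0]), 4))
```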

    Sensitivity of aerosol concentrations and cloud properties to nucleation and secondary organic distribution in ECHAM5-HAM global circulation model

    The global aerosol-climate model ECHAM5-HAM was modified to improve the representation of new particle formation in the boundary layer. An activation-type nucleation mechanism was introduced to reproduce observed nucleation rates in the lower troposphere, and a simple, computationally efficient model for biogenic secondary organic aerosol (BSOA) formation was implemented. Here we study the sensitivity of the aerosol and cloud droplet number concentrations (CDNC) to these additions. Activation-type nucleation significantly increases aerosol number concentrations in the boundary layer. The increased particle number concentrations also have a significant effect on cloud droplet number concentrations and therefore on cloud properties. We performed calculations with activation nucleation coefficient values of 2×10⁻⁷ s⁻¹, 2×10⁻⁶ s⁻¹ and 2×10⁻⁵ s⁻¹ to evaluate the sensitivity to this parameter. For BSOA we used yields of 0.025, 0.07 and 0.15 to estimate the amount of monoterpene oxidation products available for condensation. The hybrid BSOA formation scheme induces large regional changes in the size distribution of organic carbon, and therefore locally affects particle optical properties and cloud droplet number concentrations. Although activation-type nucleation improves modeled aerosol number concentrations in the boundary layer, the use of a global activation coefficient generally leads to overestimation of aerosol number. Overestimation can also arise from underestimation of primary emissions.
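    For a rough sense of what those activation coefficients imply, the snippet below evaluates the activation-type nucleation rate at a hypothetical boundary-layer sulfuric acid concentration; the linear form J = A·[H2SO4] is assumed here, since the abstract only reports the coefficient values.

```python
# Activation-type nucleation rate, J = A * [H2SO4] (formula assumed; the abstract
# only reports the coefficient values used in the sensitivity runs).
h2so4 = 1.0e7                       # sulfuric acid concentration, molecules cm^-3 (hypothetical)
for A in (2e-7, 2e-6, 2e-5):        # activation coefficients tested in the study, s^-1
    J = A * h2so4                   # nucleation rate, particles cm^-3 s^-1
    print(f"A = {A:.0e} s^-1  ->  J = {J:6.1f} cm^-3 s^-1")
```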

    Kernel methods with mixed data types and their applications

    Support Vector Machines (SVMs) are a category of supervised machine learning algorithms widely applied in both classification and regression tasks. In these algorithms, kernel functions measure the similarity between input samples in order to build models and perform predictions. For SVMs to tackle data analysis tasks involving mixed data, a valid kernel function for this purpose is required. However, the current literature offers hardly any kernel functions specifically designed to measure similarity between mixed data, and there is a notable lack of examples where such kernels have been practically implemented. Another notable characteristic of SVMs is their efficacy in addressing high-dimensional problems; however, they can become inefficient when dealing with large volumes of data. In this project, we propose the formulation of a kernel function capable of accurately capturing the similarity between samples of mixed data. We also present an SVM algorithm based on bagging techniques that enables efficient analysis of large volumes of data. Additionally, we implement both proposals in an updated version of the successful SVM library LIBSVM. Finally, we evaluate their effectiveness, robustness and efficiency, obtaining promising results.
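    A sketch of one way such a mixed-data kernel and a bagged SVM could look is given below; it is an assumed design using scikit-learn as a stand-in for the LIBSVM extension described above, with a Gower-style kernel that averages an RBF similarity on numeric columns and simple matching on (integer-encoded) categorical ones.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Which columns are numeric vs. categorical is an assumption of this toy setup;
# categorical features are integer-encoded so everything fits in one float array.
NUM_COLS, CAT_COLS, GAMMA = [0, 1], [2, 3], 0.5

def mixed_kernel(X, Y):
    """Average an RBF kernel on numeric columns with a matching kernel on categorical ones."""
    Xn, Yn = X[:, NUM_COLS], Y[:, NUM_COLS]
    k_num = np.exp(-GAMMA * ((Xn[:, None, :] - Yn[None, :, :]) ** 2).sum(-1))
    Xc, Yc = X[:, CAT_COLS], Y[:, CAT_COLS]
    k_cat = (Xc[:, None, :] == Yc[None, :, :]).mean(-1)
    return 0.5 * (k_num + k_cat)

# Hypothetical mixed dataset: two numeric and two integer-coded categorical features.
rng = np.random.default_rng(0)
n = 600
X = np.column_stack([rng.normal(size=n), rng.normal(size=n),
                     rng.integers(0, 3, size=n), rng.integers(0, 4, size=n)]).astype(float)
y = ((X[:, 0] + (X[:, 2] == 1)) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: each base SVM is trained on a 25% subsample, so no single kernel
# matrix has to cover the full training set.
model = BaggingClassifier(SVC(kernel=mixed_kernel, C=1.0),
                          n_estimators=10, max_samples=0.25, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 3))
```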