
    Transformer-based Multimodal Change Detection with Multitask Consistency Constraints

    Change detection plays a fundamental role in Earth observation for analyzing changes over time. However, recent studies have largely neglected multimodal data, which offers significant practical and technical advantages over single-modal approaches. This research focuses on leveraging digital surface model (DSM) data and aerial images captured at different times to detect change beyond 2D. We observe that current change detection methods struggle with multitask conflicts between the semantic and height change detection tasks. To address this challenge, we propose an efficient Transformer-based network that learns a shared representation between cross-dimensional inputs through cross-attention. It adopts a consistency constraint to establish the multimodal relationship: pseudo change is obtained by thresholding the height change, and the difference between semantic and pseudo change is minimized within their overlapping regions. A DSM-to-image multimodal dataset encompassing three cities in the Netherlands was constructed, laying a new foundation for beyond-2D change detection from cross-dimensional inputs. Compared to five state-of-the-art change detection methods, our model demonstrates consistent multitask superiority in both semantic and height change detection. Furthermore, the consistency strategy can be seamlessly adapted to other methods, yielding promising improvements.
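
    As a rough illustration of the consistency constraint described above (not the authors' code), the sketch below thresholds a predicted height change map to obtain a pseudo change mask and penalizes disagreement with the semantic change prediction inside that region; the function names, array shapes, and the 2 m threshold are assumptions.

        # Minimal sketch of the height-thresholding consistency constraint (assumed names and threshold).
        import numpy as np

        def consistency_loss(height_change, semantic_change_prob, threshold=2.0):
            # height_change: (H, W) predicted height difference; semantic_change_prob: (H, W) values in [0, 1]
            pseudo_change = (np.abs(height_change) > threshold).astype(float)
            overlap = pseudo_change > 0          # region where the two tasks should agree
            if not overlap.any():
                return 0.0
            # mean absolute difference between the semantic prediction and the pseudo label
            return float(np.abs(semantic_change_prob[overlap] - pseudo_change[overlap]).mean())

        # toy usage with random maps
        h = np.random.randn(64, 64) * 3.0
        p = np.random.rand(64, 64)
        print(consistency_loss(h, p))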

    Compensation and Firm Performance


    Optimal Choice of Monetary Policy Instruments in a Macroeconometric Model

    This paper uses stochastic simulation and my U.S. econometric model to examine the optimal choice of monetary policy instruments. Are the variances, covariances, and parameters in the model such as to favor one instrument over the other, in particular the interest rate over the money supply? The results show that the interest rate and the money supply are about equally good as policy instruments in terms of minimizing the variance of real GNP. The variances of some of the components of GNP are, however, much larger when the money supply is the policy instrument, as is the variance of the change in stock prices. Therefore, if one's loss function is expanded beyond simply the variance of real GNP to variances of other variables, the interest rate policy does better. The results thus provide some support for what seems to be the Fed's current choice of using the interest rate as its primary instrument. Stochastic simulation is also used to estimate how much of the variance of real GNP is due to the error terms in the demand for money equations. The results show that the contribution is not very great even when the money supply is the policy instrument.
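
    To make the methodology concrete, here is a minimal sketch, not the paper's econometric model: a toy linear economy with demand and money-demand shocks is simulated repeatedly under each instrument, and the variance of output is compared, in the spirit of the stochastic-simulation exercise described above. All coefficients and shock sizes are illustrative assumptions.

        # Toy stochastic simulation comparing instrument choices (illustrative coefficients, not the paper's model).
        import numpy as np

        rng = np.random.default_rng(0)
        T, N = 40, 500                            # periods per run, number of replications

        def mean_output_variance(instrument):
            variances = []
            for _ in range(N):
                eps_is = rng.normal(0.0, 1.0, T)  # demand (IS) shocks
                eps_lm = rng.normal(0.0, 1.0, T)  # money-demand shocks
                if instrument == "interest_rate":
                    # with the interest rate fixed, money-demand shocks do not feed into output
                    y = eps_is
                else:
                    # with the money supply fixed, the interest rate adjusts and both shocks matter
                    y = 0.7 * eps_is - 0.7 * eps_lm
                variances.append(y.var())
            return float(np.mean(variances))

        print("var(output), interest rate instrument:", mean_output_variance("interest_rate"))
        print("var(output), money supply instrument:", mean_output_variance("money_supply"))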

    How to Knit Your Own Markov Blanket

    Hohwy (2016, 2017) argues that there is a tension between the free energy principle and leading depictions of mind as embodied, enactive, and extended (so-called ‘EEE cognition’). The tension is traced to the importance, in free energy formulations, of a conception of mind and agency that depends upon the presence of a ‘Markov blanket’ demarcating the agent from the surrounding world. In what follows I show that the Markov blanket considerations do not, in fact, lead to the kinds of tension that Hohwy depicts. On the contrary, they actively favour the EEE story. This is because the Markov property, as exemplified in biological agents, picks out neither a unique nor a stationary boundary. It is this multiplicity and mutability, rather than the absence of agent-environment boundaries as such, that EEE cognition celebrates.
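
    For readers unfamiliar with the construction, the following small sketch (not from the paper) computes the Markov blanket of a node in a directed graphical model: its parents, its children, and its children's other parents. Conditioned on the blanket, the node is independent of everything else; the toy graph is an assumption chosen only to illustrate the construction.

        # Markov blanket = parents, children, and the children's other parents (toy DAG is illustrative).
        def markov_blanket(node, parents):
            # parents: dict mapping each node to the set of its parents
            children = {c for c, ps in parents.items() if node in ps}
            co_parents = set().union(*(parents[c] for c in children)) - {node} if children else set()
            return parents[node] | children | co_parents

        graph = {
            "environment": {"action"},
            "sensory": {"environment"},
            "internal": {"sensory"},
            "action": {"internal"},
        }
        print(markov_blanket("sensory", graph))   # -> {'environment', 'internal'} (order may vary)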

    Coupling the reduced-order model and the generative model for an importance sampling estimator

    In this work, we develop an importance sampling estimator by coupling a reduced-order model and a generative model in an uncertainty quantification setting. The target is to estimate the probability that the quantity of interest (QoI) in a complex system exceeds a given threshold. To avoid the prohibitive cost of sampling a large-scale system, a reduced-order model is usually considered as a trade-off between efficiency and accuracy. However, the Monte Carlo estimator given by the reduced-order model is biased due to the error from dimension reduction. To correct the bias, we still need to sample the fine model. An effective technique for variance reduction is importance sampling, where we employ the generative model to estimate the distribution of the data from the reduced-order model and use it for the change of measure in the importance sampling estimator. To compensate for the approximation errors of the reduced-order model, more data that induce a QoI slightly smaller than the threshold need to be included in the training set. Although the amount of such data can be controlled by an a posteriori error estimate, redundant data, which may outnumber the effective data, will be kept due to the epistemic uncertainty. To deal with this issue, we introduce a weighted empirical distribution to process the data from the reduced-order model. The generative model is then trained by minimizing the cross entropy between it and the weighted empirical distribution. We also introduce a penalty term into the objective function to mitigate overfitting and improve robustness. Numerical results are presented to demonstrate the effectiveness of the proposed methodology.
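
    The following is a minimal sketch of the general idea, not the authors' implementation: a cheap surrogate screens samples to locate the region where the quantity of interest exceeds the threshold, a simple Gaussian proposal is fitted there (standing in for the generative model), and the rare-event probability is then estimated from reweighted fine-model evaluations. The models, threshold, relaxation factor, and proposal family are all illustrative assumptions.

        # Surrogate-guided importance sampling for P(QoI > threshold) (illustrative models and threshold).
        import numpy as np

        rng = np.random.default_rng(1)
        norm_pdf = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

        fine = lambda x: x**3 + 0.1 * np.sin(5.0 * x)   # "expensive" fine-model QoI
        rom = lambda x: x**3                            # reduced-order surrogate
        thr = 8.0                                       # failure threshold on the QoI

        # screen cheap surrogate samples, keeping those slightly below or above the threshold
        x = rng.normal(0.0, 1.0, 200_000)
        keep = x[rom(x) > 0.9 * thr]

        # fit a simple Gaussian proposal to the kept inputs (stand-in for the generative model)
        mu, sd = keep.mean(), keep.std(ddof=1)

        # importance-sampling estimate with fine-model evaluations and weights p/q
        xs = rng.normal(mu, sd, 2_000)
        w = norm_pdf(xs, 0.0, 1.0) / norm_pdf(xs, mu, sd)
        print("estimated P(QoI > thr):", float(np.mean((fine(xs) > thr) * w)))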

    The “infrastructural geopolitics” of climate knowledge: the Brazilian Earth System Model and the North-South divide of knowledge

    This article examines how geopolitics is embedded in the efforts of Southern nations to build new climate knowledge infrastructures. It does so through an analysis of the composition of the international climate modelling basis of the Intergovernmental Panel on Climate Change (IPCC), viewed from the perspective of the Brazilian Earth System Model (BESM), the scientific project that placed a Latin American country inside the IPCC's global modelling base for the first time. The paper argues that beyond the idea of “infrastructural globalism”, a historical process of global scientific cooperation led by developed countries, we also need to understand the “infrastructural geopolitics” of climate models. This concept seeks to describe the actions of developing countries towards minimizing the imbalance of global climate scientific production, and how these countries participate in global climate governance and politics. The analysis of the construction of BESM suggests that national investments in global climate modelling were aimed at attaining scientific sovereignty, which is closely related to a notion of political sovereignty of the nation-state within the international climate change regime.

    Non-parametric mass reconstruction of A1689 from strong lensing data with SLAP

    We present the mass distribution in the central area of the cluster A1689, obtained by fitting over 100 multiply lensed images with the non-parametric Strong Lensing Analysis Package (SLAP, Diego et al. 2004). The surface mass distribution is recovered in a robust way, giving a total mass of 0.25E15 M_sun/h within a circle of 70'' radius from the central peak. Our reconstructed density profile is well fitted by an NFW profile with small perturbations due to substructure, and is compatible with the more model-dependent analysis of Broadhurst et al. (2004a) based on the same data. Our estimated mass does not rely on any prior information about the distribution of dark matter in the cluster. The peak of the mass distribution falls very close to the central cD, and there is substructure near the center suggesting that the cluster is not fully relaxed. We also examine the effect on the recovered mass when we include the uncertainties in the redshift of the sources and in the original shape of the sources. Using simulations designed to mimic the data, we identify some biases in our reconstructed mass distribution. We find that the recovered mass is biased toward lower masses beyond 1 arcmin (150 kpc) from the central cD, and that in the very center we may be affected by degeneracy problems. On the other hand, we confirm that the reconstructed mass between 25'' and 70'' is a robust, unbiased estimate of the true mass distribution and is compatible with an NFW profile. Comment: 11 pages, 12 figures. Submitted to MNRAS. A full-resolution version of the paper can be found at http://darwin.physics.upenn.edu/SLAP
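
    As a small illustration of the profile comparison mentioned above (not the SLAP reconstruction itself), the sketch below defines the standard NFW density profile and fits it to a synthetic radial profile with scatter; the data, units, and parameter values are assumptions.

        # NFW profile rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)^2] fitted to synthetic data.
        import numpy as np
        from scipy.optimize import curve_fit

        def nfw(r, rho_s, r_s):
            x = r / r_s
            return rho_s / (x * (1.0 + x) ** 2)

        rng = np.random.default_rng(0)
        r = np.logspace(-1.5, 0.5, 30)                        # radii in arbitrary units
        rho_obs = nfw(r, 1.0, 0.3) * rng.lognormal(0.0, 0.1, r.size)   # synthetic profile with 10% scatter

        popt, _ = curve_fit(nfw, r, rho_obs, p0=(1.0, 0.2))
        print("fitted (rho_s, r_s):", popt)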

    Density-density functionals and effective potentials in many-body electronic structure calculations

    We demonstrate the existence of different density-density functionals designed to retain selected properties of the many-body ground state in a non-interacting solution, starting from the standard density functional theory ground state. We focus on diffusion quantum Monte Carlo applications that require trial wave functions with optimal Fermion nodes. The theory is extensible and can be used to understand current practices in several electronic structure methods within a generalized density functional framework. The theory justifies and stimulates the search for optimal empirical density functionals and effective potentials for accurate calculations of the properties of real materials, but it also cautions about the limits of their applicability. The concepts are tested and validated with a near-analytic model. Comment: five figures
