Mapping the results of local statistics
The application of geographically weighted regression (GWR) – a local spatial statistical technique used to test for spatial nonstationarity – has grown rapidly in the social, health and demographic sciences. GWR is a useful exploratory analytical tool that generates a set of location-specific parameter estimates which can be mapped and analysed to provide information on spatial nonstationarity in relationships between predictors and the outcome variable. A major challenge to GWR users, however, is how best to map these parameter estimates. This paper introduces a simple mapping technique that combines local parameter estimates and local t-values on one map. The resultant map can facilitate the exploration and interpretation of nonstationarity.

Keywords: geographically weighted regression, local statistics, mapping, nonstationarity
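One common way to combine local estimates and local t-values on a single surface (not necessarily the exact technique this paper proposes) is to mask out cells whose t-value falls below a conventional significance threshold. A minimal sketch, using synthetic GWR-style output on a regular grid; the array names and the 0.2 standard error are purely illustrative:

```python
import numpy as np

# Hypothetical GWR output: one local coefficient estimate and its
# t-value per grid cell (here simulated for illustration only).
rng = np.random.default_rng(1)
beta = rng.normal(0.5, 0.4, size=(20, 20))  # local parameter estimates
t = beta / 0.2                              # illustrative local t-values

# Combine the two surfaces: keep an estimate only where its |t|
# exceeds the conventional 5% two-tailed critical value (1.96),
# replacing nonsignificant cells with NaN so they render as blank.
sig_mask = np.abs(t) >= 1.96
beta_masked = np.where(sig_mask, beta, np.nan)
```

The masked surface can then be drawn with any raster mapping tool (e.g. matplotlib's `imshow`, which leaves NaN cells uncoloured), so a single map shows both where the relationship is significant and how strong it is there.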
Endogenous Correlation
We model endogenous correlation in asset returns via the role of heterogeneous expectations across investor types, and the dynamic impact of imitative learning by investors. Learning is driven by relative performance. In addition, we allow a cautious, slow learning pace to reflect institutional conditions. Imitative learning shapes the market ecology that influences price formation. Using a model of non-imitative agents as a benchmark, our results show that the dynamics of imitative learning endogenously induce a significant degree of asset dependency and patterns of non-constant correlation. The asymmetric learning effect on correlation, moreover, implies a self-reinforcing process, where a bearish condition amplifies the effect and further exacerbates asset dependency. We conclude that imitative learning, even when rational, can to a certain extent account for the phenomenon of market crashes. Our results have implications for transparency in regulation.
An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems
We present an alternating augmented Lagrangian method for convex optimization
problems where the cost function is the sum of two terms, one that is separable
in the variable blocks, and a second that is separable in the difference
between consecutive variable blocks. Examples of such problems include Fused
Lasso estimation, total variation denoising, and multi-period portfolio
optimization with transaction costs. In each iteration of our method, the first
step involves separately optimizing over each variable block, which can be
carried out in parallel. The second step is not separable in the variables, but
can be carried out very efficiently. We apply the algorithm to segmentation of
data based on changes in mean (l_1 mean filtering) or changes in variance (l_1
variance filtering). In a numerical example, we show that our implementation is
around 10,000 times faster than the generic optimization solver SDPT3
- …
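The structure the abstract describes – a cost that splits into a term separable in the variable blocks plus a term separable in differences of consecutive blocks – can be illustrated with l_1 mean filtering (total variation denoising). A minimal sketch of a standard ADMM splitting for this problem (a generic textbook formulation, not necessarily the authors' exact algorithm; the function names and parameter choices are illustrative):

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def l1_mean_filter(y, lam=1.0, rho=1.0, n_iter=200):
    """l1 mean filtering via ADMM:
       minimize 0.5*||x - y||^2 + lam*||D x||_1,
       where D is the first-difference operator on consecutive entries."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference matrix
    A = np.eye(n) + rho * D.T @ D         # fixed matrix for the x-update
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                   # scaled dual variable
    for _ in range(n_iter):
        # Step 1: quadratic step, separable given (z, u); a linear solve here.
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        # Step 2: prox of the l1 term on the consecutive differences.
        z = soft_threshold(D @ x + u, lam / rho)
        # Dual (running residual) update.
        u = u + D @ x - z
    return x

# Denoise a piecewise-constant signal with one level shift.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), 5 * np.ones(50)]) + 0.3 * rng.standard_normal(100)
x = l1_mean_filter(y, lam=5.0)
```

The returned `x` is approximately piecewise constant, with the change point near the true level shift; larger `lam` produces fewer, more strongly shrunk segments.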