Network-constrained models of liberalized electricity markets: the devil is in the details
Numerical models of electricity markets are frequently used to inform and support decisions. How robust are their results? Three research groups used the same realistic data set for generators, demand, and the transmission network as input to their numerical models. The predictions coincide in the competitive case. In the strategic case, in which large generators can exercise market power, the predicted prices differ significantly. The results are highly sensitive to assumptions about market design, the timing of the market, and constraints on the rationality of generators; given the same assumptions, the results coincide. We provide a checklist to help users understand the implications of different modelling assumptions.
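To make the sensitivity to behavioural assumptions concrete, the sketch below contrasts the two polar cases on a toy three-generator market with linear demand and linear marginal costs. All parameter values are invented for illustration; this is not the shared data set used in the study, and real network-constrained models add transmission limits and market-design detail on top.

```python
import numpy as np

# Toy market: inverse demand P(Q) = a - b*Q; generator i has marginal
# cost MC_i(q) = c_i + d_i*q. All values are illustrative.
a, b = 100.0, 0.5
c = np.array([10.0, 12.0, 14.0])
d = np.array([0.10, 0.08, 0.12])
n = len(c)

# Competitive benchmark: every generator produces until MC equals the
# price, q_i = (P - c_i)/d_i, and P = a - b*sum(q_i); solve for P.
p_comp = (a + b * np.sum(c / d)) / (1.0 + b * np.sum(1.0 / d))

# Strategic (Cournot) case: firm i maximizes (a - b*Q)*q_i - C_i(q_i);
# the first-order conditions read (2b + d_i)*q_i + b*sum_{j!=i} q_j = a - c_i.
M = np.full((n, n), b) + np.diag(b + d)   # diagonal 2b + d_i, off-diagonal b
q = np.linalg.solve(M, a - c)
p_cournot = a - b * q.sum()

print(f"competitive price: {p_comp:.2f}  Cournot price: {p_cournot:.2f}")
```

The same generators and demand yield a markedly higher price once strategic withholding is assumed, which is the divergence the abstract describes.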
Minimal mechanisms for vegetation patterns in semiarid regions
The minimal ecological requirements for the formation of regular vegetation patterns in semiarid systems have recently been questioned. Against the general belief that a combination of facilitative and competitive interactions is necessary, recent theoretical studies suggest that, under broad conditions, nonlocal competition among plants alone may induce patterns. In this paper, we review results along this line, presenting a series of models that yield spatial patterns when finite-range competition is the only driving force. A preliminary derivation of this type of model from a more detailed one that considers water-biomass dynamics is also presented.
Keywords: vegetation patterns, nonlocal interactions
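The following sketch illustrates the core claim on the simplest such model: logistic growth with purely nonlocal, finite-range (top-hat) competition on a 1-D periodic domain. The kernel shape, domain size, and time stepping are arbitrary choices for illustration; the reviewed papers treat this class of models in far more generality.

```python
import numpy as np

# Nonlocal logistic model: db/dt = b * (1 - (K * b)), with K a top-hat
# competition kernel of radius R on a periodic 1-D domain.
L_dom, N = 100.0, 512
dx = L_dom / N
x = np.arange(N) * dx
R = 4.0                                   # finite competition range

kernel = (np.minimum(x, L_dom - x) <= R).astype(float)
kernel /= kernel.sum() * dx               # normalize: integral of K = 1
K_hat = np.fft.rfft(kernel) * dx          # for convolution by FFT

rng = np.random.default_rng(0)
b = 1.0 + 0.01 * rng.standard_normal(N)   # perturbed homogeneous state
dt = 0.01
for _ in range(20_000):                   # explicit Euler in time
    comp = np.fft.irfft(np.fft.rfft(b) * K_hat, n=N)   # (K * b)(x)
    b = np.maximum(b + dt * b * (1.0 - comp), 0.0)

# The homogeneous state b = 1 is unstable at wavenumbers where the
# kernel's Fourier transform is negative, so a periodic pattern with
# wavelength of roughly 1.4*R emerges with no facilitation at all.
freqs = np.fft.rfftfreq(N, d=dx)
peak = freqs[1:][np.argmax(np.abs(np.fft.rfft(b - b.mean()))[1:])]
print(f"dominant wavelength: {1.0 / peak:.1f}  (R = {R})")
```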
Monotonic regression based on Bayesian P-splines: an application to estimating price response functions from store-level scanner data
Generalized additive models have become a widely used instrument for flexible regression analysis. In many practical situations, however, it is desirable to restrict the flexibility of nonparametric estimation in order to accommodate a presumed monotonic relationship between a covariate and the response variable. For example, consumers usually buy less of a brand if its price increases, and therefore one expects a brand's unit sales to be a decreasing function of its own price. We follow a Bayesian approach using penalized B-splines and incorporate the assumption of monotonicity in a natural way through an appropriate specification of the respective prior distributions. We illustrate the methodology in an empirical application modeling demand for a brand of orange juice and show that imposing monotonicity constraints for own- and cross-item price effects considerably improves the predictive validity of the estimated sales response function.
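A minimal sketch of the key constraint trick, in a simplified frequentist form rather than the paper's Bayesian prior specification: expand the price effect in B-splines, penalize second-order coefficient differences (the P-spline part), and force the fit to be decreasing by bounding all coefficient increments to be nonpositive. The data, knot grid, and smoothing parameter are invented, and SciPy's BSpline.design_matrix (SciPy >= 1.8) is assumed available.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

# Toy scanner-like data: unit sales decreasing in own price, plus noise.
rng = np.random.default_rng(1)
price = np.sort(rng.uniform(1.0, 3.0, 200))
sales = 5.0 - 1.5 * np.log(price) + 0.1 * rng.standard_normal(200)

# Cubic B-spline design matrix on an equidistant knot grid.
k = 3
t = np.r_[[price[0]] * k, np.linspace(price[0], price[-1], 12), [price[-1]] * k]
B = BSpline.design_matrix(price, t, k).toarray()
m = B.shape[1]

D = np.diff(np.eye(m), n=2, axis=0)   # second-order difference penalty
lam = 10.0

# Reparameterize beta = L @ theta (cumulative sums). Nonincreasing spline
# coefficients suffice for a nonincreasing spline, so bound theta[1:] <= 0.
L = np.tril(np.ones((m, m)))
A = np.vstack([B @ L, np.sqrt(lam) * (D @ L)])
y = np.r_[sales, np.zeros(m - 2)]
ub = np.r_[np.inf, np.zeros(m - 1)]
theta = lsq_linear(A, y, bounds=(np.full(m, -np.inf), ub)).x
fit = B @ (L @ theta)                 # monotonically decreasing price response
```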
Towards Effective Codebookless Model for Image Classification
The bag-of-features (BoF) model for image classification has been thoroughly studied over the last decade. In contrast to the widely used BoF methods, which model images with a pre-trained codebook, the alternative codebook-free image modeling approach, which we call the Codebookless Model (CLM), has attracted little attention. In this paper, we present an effective CLM that represents an image with a single Gaussian for classification. By embedding the Gaussian manifold into a vector space, we show that simply incorporating our CLM into a linear classifier achieves very competitive accuracy compared with state-of-the-art BoF methods (e.g., the Fisher Vector). Since our CLM lies on a high-dimensional Riemannian manifold, we further propose a method for jointly learning a low-rank transformation with a support vector machine (SVM) classifier on the Gaussian manifold, in order to reduce computational and storage costs. To study and alleviate the side effect of background clutter on our CLM, we also present a simple yet effective partial background removal method based on saliency detection. Extensive experiments on eight widely used databases demonstrate the effectiveness and efficiency of our CLM method.
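The sketch below shows one standard way to realize the "single Gaussian per image, embedded into a vector space" idea: fit a Gaussian to the image's local descriptors, identify it with a symmetric positive-definite (SPD) matrix, and flatten that matrix with a matrix logarithm so a linear SVM can consume it. The specific SPD construction and weighting here are a common choice from the literature, not necessarily the exact embedding used in the paper, and the random descriptors stand in for real dense features (e.g., SIFT).

```python
import numpy as np
from scipy.linalg import logm

def clm_vector(descriptors, eps=1e-4):
    """Map local descriptors (n x d) to a single-Gaussian image vector."""
    d = descriptors.shape[1]
    mu = descriptors.mean(axis=0)
    sigma = np.cov(descriptors, rowvar=False) + eps * np.eye(d)  # regularize
    # Identify N(mu, sigma) with a (d+1) x (d+1) SPD matrix.
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = sigma + np.outer(mu, mu)
    P[:d, d] = P[d, :d] = mu
    P[d, d] = 1.0
    S = np.real(logm(P))            # log-Euclidean flattening
    w = np.full_like(S, np.sqrt(2.0))
    np.fill_diagonal(w, 1.0)        # preserve Frobenius norm when vectorizing
    return (w * S)[np.triu_indices(d + 1)]

# Toy usage: 500 random 8-D descriptors standing in for dense SIFT.
rng = np.random.default_rng(0)
v = clm_vector(rng.standard_normal((500, 8)))   # feed v to a linear SVM
```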
Steered mixture-of-experts for light field images and video: representation and coding
Research in light field (LF) processing has increased considerably over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, such 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays at any angle arriving at a certain region. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application for 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid range bitrates with respect to subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of roughly a factor of 4 in bitrate for the same quality. At least equally important, our method inherently offers functionality for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
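To make the kernel representation concrete, here is a toy 2-D (image-domain) version of the reconstruction step: each kernel is a Gaussian over pixel coordinates, gates are the normalized kernel responses, and each expert predicts intensity as an affine function of position; higher-dimensional LF variants add view coordinates as extra dimensions. The kernel parameters below are random placeholders; in the actual framework they would be fitted (e.g., by EM) and then quantized into the bitstream.

```python
import numpy as np

def smoe_reconstruct(coords, centers, covs, expert_w, expert_b):
    """coords (P,2); centers (K,2); covs (K,2,2); affine experts (K,2), (K,)."""
    resp = np.empty((coords.shape[0], centers.shape[0]))
    for k in range(centers.shape[0]):
        diff = coords - centers[k]
        inv = np.linalg.inv(covs[k])
        resp[:, k] = np.exp(-0.5 * np.einsum('pi,ij,pj->p', diff, inv, diff))
    gates = resp / (resp.sum(axis=1, keepdims=True) + 1e-12)  # soft gating
    experts = coords @ expert_w.T + expert_b    # (P,K) affine predictions
    return (gates * experts).sum(axis=1)        # gate-weighted mixture

# Toy usage: 3 hypothetical kernels reconstructing a 64x64 image.
ys, xs = np.mgrid[0:64, 0:64]
coords = np.stack([xs.ravel(), ys.ravel()], 1).astype(float)
rng = np.random.default_rng(0)
img = smoe_reconstruct(coords,
                       rng.uniform(0, 64, (3, 2)),
                       np.repeat(np.eye(2)[None] * 200.0, 3, axis=0),
                       0.01 * rng.standard_normal((3, 2)),
                       rng.uniform(0, 1, 3)).reshape(64, 64)
```

Because the model is a continuous function of position, evaluating it at arbitrary (possibly fractional) coordinates gives the zero-delay random access and intrinsic interpolation properties noted in the abstract.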
Modeling Tiered Pricing in the Internet Transit Market
ISPs are increasingly selling "tiered" contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle traverses. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how such pricing affects ISPs and their customers. While contracts that sell connectivity at finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand. In this work we present two contributions: (1) we develop a novel way of mapping traffic and topology data to a demand and cost model; and (2) we fit this model to three large real-world networks (a European transit ISP, a content distribution network, and an academic research network) and run counterfactuals to evaluate the effects of different pricing strategies on both ISP profit and consumer surplus. We highlight three core findings. First, ISPs gain most of the profits with only three or four pricing tiers and likely have little incentive to increase the granularity of pricing further. Second, we show that consumer surplus closely, if not precisely, tracks the increases in ISP profit with more pricing tiers. Finally, the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for local traffic) can be suboptimal; dividing contracts into only three or four tiers based on both traffic demand and the cost of carrying it yields near-optimal profit for the ISP.
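A toy numerical illustration of the saturation effect (synthetic flows, not the paper's fitted demand and cost model): flows receive heterogeneous carriage costs and willingness to pay, tiers are formed by cost quantiles, and each tier gets its single profit-maximizing price. Under these assumptions, profit typically stops improving noticeably beyond three or four tiers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cost = rng.lognormal(sigma=0.8, size=n)          # per-flow carriage cost
value = cost * rng.lognormal(0.5, 0.5, size=n)   # willingness to pay

def best_tier_profit(v, c):
    """Best single price for one tier: a flow is bought iff price <= its
    valuation, so only the (sorted) valuations are candidate prices."""
    if v.size == 0:
        return 0.0
    order = np.argsort(v)[::-1]                  # valuations, descending
    v_s, c_s = v[order], c[order]
    profit = v_s * np.arange(1, v.size + 1) - np.cumsum(c_s)
    return max(profit.max(), 0.0)

for k in (1, 2, 3, 4, 8):                        # number of pricing tiers
    edges = np.quantile(cost, np.linspace(0, 1, k + 1))
    tier = np.clip(np.searchsorted(edges, cost, 'right') - 1, 0, k - 1)
    total = sum(best_tier_profit(value[tier == t], cost[tier == t])
                for t in range(k))
    print(f"{k} tiers: profit {total:,.0f}")
```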