A photonic-crystal selective filter
A highly selective filter is designed, working at 1.55 μm and having a 3-dB bandwidth narrower than 0.4 nm, as required in Dense Wavelength Division Multiplexed systems. Different solutions are proposed, involving photonic crystals made of rectangular- or circular-section dielectric rods, or else of holes drilled in a dielectric bulk. The polarization and frequency selective properties are achieved by introducing a defect in the periodic structure. The device is studied by using in-house codes implementing the full-wave Fourier Modal Method. Practical guidelines about the advantages and limits of the investigated solutions are given.
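As a rough back-of-the-envelope check on the stated specification (the abstract itself does not quote a quality factor), a 3-dB bandwidth below 0.4 nm at a 1.55 μm operating wavelength corresponds to a loaded quality factor of at least
\[
Q \;=\; \frac{\lambda}{\Delta\lambda_{3\,\mathrm{dB}}} \;>\; \frac{1550\ \text{nm}}{0.4\ \text{nm}} \;\approx\; 3.9 \times 10^{3}.
\]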
State Lotteries and Consumer Behavior
Despite considerable controversy surrounding the use of state lotteries as a means of public finance, little is known about their consumer consequences. This project investigates two central questions about lotteries. First, do state lotteries primarily crowd out other forms of gambling, or do they crowd out non-gambling consumption? Second, does consumer demand for lottery games respond to expected returns, as maximizing behavior predicts, or do consumers appear to be misinformed about the risks and returns of lottery gambles? Analyses of multiple sources of micro-level gambling data demonstrate that lottery spending does not substitute for other forms of gambling. Household consumption data suggest that household lottery gambling crowds out approximately $38 per month, or two percent, of other household consumption, with larger proportional reductions among low-income households. Demand for lottery products responds positively to the expected value of the gamble, controlling for other moments of the gamble and product characteristics; this suggests that consumers of lottery products are not simply uninformed, but are perhaps making fully-informed purchases.
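For reference, a generic textbook definition (not taken from the paper): the net expected value of a lottery gamble with ticket price $c$, prizes $x_i$, and winning probabilities $p_i$ is
\[
\mathbb{E}[X] \;=\; \sum_i p_i x_i \;-\; c,
\]
while the "other moments of the gamble" controlled for above are quantities such as the variance and skewness of the payoff distribution.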
Evaluating color texture descriptors under large variations of controlled lighting conditions
The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database allows the robustness of texture descriptors to be systematically investigated across a large range of variations of imaging conditions.
Comment: Submitted to the Journal of the Optical Society of America
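To illustrate the kind of color normalization step compared above, here is a minimal Python sketch of gray-world normalization, one common choice; the specific normalizations and descriptors evaluated in the paper may differ.

```python
import numpy as np

def gray_world_normalize(image: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so that all channel means become equal
    (the gray-world assumption), removing a global color cast."""
    channel_means = image.reshape(-1, 3).mean(axis=0)        # mean of R, G, B
    gains = channel_means.mean() / (channel_means + 1e-8)    # per-channel correction gains
    return np.clip(image * gains, 0.0, 1.0)

# Example: normalize a synthetic image with a reddish cast before extracting texture features.
img = np.random.rand(64, 64, 3) * np.array([1.0, 0.7, 0.6])
normalized = gray_world_normalize(img)
print(normalized.reshape(-1, 3).mean(axis=0))  # channel means are now approximately equal
```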
Towards modular verification of pathways: fairness and assumptions
Modular verification is a technique used to address the state explosion problem often encountered in the verification of properties of complex systems such as concurrent interactive systems. The modular approach is based on the observation that properties of interest often concern a rather small portion of the system. As a consequence, reduced models can be constructed which approximate the overall system behaviour, thus allowing more efficient verification.
Biochemical pathways can be seen as complex concurrent interactive systems. Consequently, verification of their properties is often computationally very expensive and could take advantage of the modular approach.
In this paper we report preliminary results on the development of a modular verification framework for biochemical pathways. We view biochemical pathways as concurrent systems of reactions competing for molecular resources. A modular verification technique could be based on reduced models containing only the reactions involving molecular resources of interest.
For a proper description of the system behaviour we argue that it is essential to consider a suitable notion of fairness, which is a well-established notion in concurrency theory but novel in the field of pathway modelling. We propose a modelling approach that includes fairness and we identify the assumptions under which verification of properties can be done in a modular way.
We prove the correctness of the approach and demonstrate it on the model of the EGF receptor-induced MAP kinase cascade by Schoeberl et al.
Comment: In Proceedings MeCBIC 2012, arXiv:1211.347
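To make the idea of reduced models concrete, the following minimal Python sketch projects a reaction set onto the reactions that involve chosen molecular resources; the names and data layout are hypothetical illustrations, not the paper's formalism or the Schoeberl et al. model.

```python
def reduce_pathway(reactions, resources_of_interest):
    """Keep only the reactions that consume or produce at least one resource of interest."""
    interest = set(resources_of_interest)
    return [r for r in reactions
            if interest & (set(r["reactants"]) | set(r["products"]))]

# Hypothetical toy reactions (illustrative names only).
reactions = [
    {"name": "binding",         "reactants": ["EGF", "EGFR"], "products": ["EGF:EGFR"]},
    {"name": "phosphorylation", "reactants": ["MEK", "ATP"],  "products": ["MEK-P", "ADP"]},
    {"name": "unrelated",       "reactants": ["X"],           "products": ["Y"]},
]

# A reduced model built around MEK keeps only the phosphorylation reaction.
print([r["name"] for r in reduce_pathway(reactions, ["MEK"])])  # ['phosphorylation']
```

Properties mentioning only the resources of interest could then be checked on the reduced model, provided the fairness assumptions discussed above hold.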
Color Constancy Using CNNs
In this work we describe a Convolutional Neural Network (CNN) to accurately predict the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of the local illuminant estimation ability of our CNN.
Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop)
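A minimal PyTorch sketch of the described patch-based architecture (one convolutional layer with max pooling, one fully connected layer, three output nodes) is given below; the patch size, kernel size and channel counts are assumptions for illustration, not the values used in the paper.

```python
import torch
import torch.nn as nn

class PatchIlluminantCNN(nn.Module):
    """One conv layer + max pooling, one fully connected layer, and three output
    nodes giving an RGB illuminant estimate for an input patch."""
    def __init__(self, patch_size: int = 32, channels: int = 64, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=patch_size)   # pools each feature map to 1x1
        self.fc = nn.Linear(channels, hidden)
        self.out = nn.Linear(hidden, 3)                     # three output nodes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(torch.relu(self.conv(x)))             # B x channels x 1 x 1
        return self.out(torch.relu(self.fc(x.flatten(1))))  # B x 3

# Example: estimate the illuminant for a batch of 32x32 RGB patches.
patches = torch.rand(8, 3, 32, 32)
print(PatchIlluminantCNN()(patches).shape)  # torch.Size([8, 3])
```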
Automated Pruning for Deep Neural Network Compression
In this work we present a method to improve the pruning step of the current state-of-the-art methodology for compressing neural networks. The novelty of the proposed pruning technique is its differentiability, which allows pruning to be performed during the backpropagation phase of network training. This enables end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights makes it possible to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33% compared to the current state-of-the-art. Furthermore, we believe that this is the first study in which the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality as those learned by the corresponding non-compressed network on a set of different recognition tasks.
Comment: 8 pages, 5 figures. Published as a conference paper at ICPR 201
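The abstract does not give the actual pruning functions or regularizer, so the following Python sketch only illustrates the general idea of a differentiable, threshold-based pruning gate trained jointly with the weights; the specific functional form is an assumption.

```python
import torch
import torch.nn as nn

class SoftThresholdGate(nn.Module):
    """Differentiable pruning gate: weights with magnitude below a learned threshold
    are smoothly driven to zero, so the threshold can be trained by backpropagation."""
    def __init__(self, init_threshold: float = 0.05, temperature: float = 100.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))  # learned jointly with the weights
        self.temperature = temperature                                # controls how sharp the gate is

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.temperature * (weight.abs() - self.threshold))
        return weight * gate

    def sparsity_regularizer(self) -> torch.Tensor:
        # One simple choice: reward a larger threshold, i.e. more aggressive pruning.
        return -self.threshold

# Example: gate a weight matrix and measure the fraction of effectively pruned weights.
w = 0.1 * torch.randn(256, 256)
gate = SoftThresholdGate()
pruned = gate(w)
print((pruned.abs() < 1e-3).float().mean())
```

In training, the regularizer would simply be added to the task loss with a weighting coefficient, so thresholds and weights are optimized together.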
