Feature-Based Diversity Optimization for Problem Instance Classification
Understanding the behaviour of heuristic search methods is a challenge. This
even holds for simple local search methods such as 2-OPT for the Traveling
Salesperson problem. In this paper, we present a general framework that is able
to construct a diverse set of instances that are hard or easy for a given
search heuristic. Such a diverse set is obtained by using an evolutionary
algorithm for constructing hard or easy instances that are diverse with respect
to different features of the underlying problem. Examining the constructed
instance sets, we show that many combinations of two or three features give a
good classification of TSP instances in terms of whether they are hard to
solve by 2-OPT.
Comment: 20 pages, 18 figures
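The evolutionary loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the feature functions, mutation operator, and population size are placeholders, and instance hardness is omitted entirely so the sketch shows only the diversity-preserving selection step.

```python
import math
import random

def feature_vector(instance):
    """Toy features of a 2-D point set: mean x and mean y.
    Placeholders for real TSP features such as cluster counts
    or edge-length statistics used in the paper."""
    xs = [p[0] for p in instance]
    ys = [p[1] for p in instance]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def diversity_contribution(fv, others):
    """Distance to the nearest other feature vector: individuals
    close to an existing one contribute little diversity."""
    return min(math.dist(fv, o) for o in others) if others else float("inf")

def evolve(population, generations=50, rng=None):
    """Mutate instances and keep the population diverse in feature space."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        # Create a child by jittering the cities of a random parent.
        parent = rng.choice(population)
        child = [(x + rng.uniform(-0.05, 0.05), y + rng.uniform(-0.05, 0.05))
                 for x, y in parent]
        population.append(child)
        # Discard the individual contributing least to feature diversity.
        fvs = [feature_vector(ind) for ind in population]
        contribs = [diversity_contribution(fv, fvs[:i] + fvs[i + 1:])
                    for i, fv in enumerate(fvs)]
        population.pop(contribs.index(min(contribs)))
    return population
```

In the paper's setting, survival would additionally be conditioned on the instance staying hard (or easy) for 2-OPT; here only the feature-diversity pressure is shown.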
Measuring Effectiveness of Quantitative Equity Portfolio Management Methods
In this paper, I use quantitative computer models to measure the effectiveness of Quantitative Equity Portfolio Management in predicting future stock returns using commonly accepted industry valuation factors. Industry knowledge and practices are first examined in order to determine strengths and weaknesses, as well as to build a foundation for the modeling. In order to assess the accuracy of the model and its inherent concepts, I employ up to ten years of historical data for a sample of stocks. The analysis examines the historical data to determine whether there is any correlation between returns and the valuation factors. Results suggest that the price-to-cash-flow and price-to-EBITDA ratios were significant predictors of future returns, while the price-to-earnings ratio was an insignificant predictor.
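The core measurement in the abstract above — correlating a valuation factor with subsequent returns — can be sketched as a Pearson correlation. The data below is made-up illustrative input, not the study's sample, and the function names are my own.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: price-to-cash-flow at time t vs. return over t..t+1.
price_to_cash_flow = [8.1, 12.4, 6.9, 15.0, 9.7]
forward_returns    = [0.07, 0.01, 0.09, -0.02, 0.05]
signal_strength = pearson(price_to_cash_flow, forward_returns)
```

A study would compute this per factor across the historical window and judge significance (e.g. via a t-test on the coefficient), which is beyond this sketch.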
Employing Classifying Terms for Testing Model Transformations
This contribution proposes a new technique for developing test cases for UML and OCL models. The technique is based on an approach that automatically constructs object
models for class models enriched by OCL constraints. By guiding the construction process through so-called classifying terms, the test cases, built in the form of object models, are partitioned into equivalence classes. A classifying term can be an arbitrary OCL term on the class model that calculates a characteristic value for an object model. From each equivalence class of object models with identical characteristic values, one representative is chosen. The constructed test cases behave significantly differently with respect to the selected classifying term. By building a few diverse object models, properties of the UML and OCL model can be explored effectively. The technique is applied to automatically construct relevant source model test cases for model transformations between a source and target metamodel.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
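The equivalence-class mechanism above — evaluate a classifying term on each object model, group by the resulting characteristic value, and keep one representative per group — can be sketched generically. The function names and the dictionary-based "object models" are illustrative assumptions, not the paper's OCL-based tooling.

```python
def partition_by_classifying_term(models, term):
    """Group object models by the characteristic value the
    classifying term computes for each of them."""
    classes = {}
    for model in models:
        classes.setdefault(term(model), []).append(model)
    return classes

def representatives(classes):
    """Choose one representative per equivalence class."""
    return {value: members[0] for value, members in classes.items()}

# Hypothetical object models and a classifying term counting links.
models = [{"links": 0}, {"links": 2}, {"links": 0}, {"links": 5}]
classes = partition_by_classifying_term(models, lambda m: m["links"] > 0)
tests = representatives(classes)  # one model with links, one without
```

In the actual technique, the term is an OCL expression evaluated by a model finder during construction, so only one model per class is ever built rather than filtered afterwards.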
Causal sites as quantum geometry
We propose a structure called a causal site to use as a setting for quantum
geometry, replacing the underlying point set. The structure has an interesting
categorical form, and a natural "tangent 2-bundle," analogous to the tangent
bundle of a smooth manifold. Examples with reasonable finiteness conditions
have an intrinsic geometry, which can approximate classical solutions to
general relativity. We propose an approach to quantization of causal sites as
well.
Comment: 21 pages, 3 eps figures; v2: added references; to appear in JM
Best Sources Forward: Domain Generalization through Source-Specific Nets
A long-standing problem in visual object categorization is the ability of algorithms to generalize across different testing conditions. The problem has been formalized as a covariate shift between the probability distributions generating the training data (source) and the test data (target), and several domain adaptation methods have been proposed to address this issue. While these approaches have considered the single source-single target scenario, it is plausible to have multiple sources and require adaptation to any possible target domain. This last scenario, named Domain Generalization (DG), is the focus of our work. Unlike previous DG methods, which learn domain-invariant representations from source data, we design a deep network with multiple domain-specific classifiers, each associated with a source domain. At test time we estimate the probabilities that a target sample belongs to each source domain and exploit them to optimally fuse the classifiers' predictions. To further improve the generalization ability of our model, we also introduce a domain-agnostic component supporting the final classifier. Experiments on two public benchmarks demonstrate the power of our approach.
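The test-time fusion step described above can be sketched as a probability-weighted combination of per-source class scores. This is a minimal illustration of the fusion rule only; the function name is hypothetical, and the domain probabilities and classifier outputs would come from the trained network, not be supplied by hand.

```python
def fuse_predictions(domain_probs, classifier_preds):
    """Fuse source-specific classifier outputs for one target sample.

    domain_probs:     estimated probability that the sample belongs
                      to each source domain (one weight per source).
    classifier_preds: per-source class-score vectors, all of equal length.
    """
    n_classes = len(classifier_preds[0])
    fused = [0.0] * n_classes
    for weight, preds in zip(domain_probs, classifier_preds):
        for c in range(n_classes):
            fused[c] += weight * preds[c]
    return fused

# Hypothetical example: two source domains, two classes. The sample is
# judged far more likely to come from the first source, so the first
# classifier dominates the fused prediction.
fused = fuse_predictions([0.9, 0.1], [[0.8, 0.2], [0.3, 0.7]])
```

A domain-agnostic component, as in the abstract, would simply contribute one more weighted score vector to the same sum.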