
    Deep Elastic Networks with Model Selection for Multi-Task Learning

    In this work, we consider the problem of instance-wise dynamic network model selection for multi-task learning. To this end, we propose an efficient approach to exploit a compact but accurate model in a backbone architecture for each instance of all tasks. The proposed method consists of an estimator and a selector. The estimator is based on a backbone architecture and structured hierarchically; it can produce multiple network models of different configurations within the hierarchical structure. The selector chooses a model dynamically from a pool of candidate models given an input instance. The selector is a relatively small network consisting of a few layers, which estimates a probability distribution over the candidate models when an input instance of a task is given. Both the estimator and the selector are jointly trained in a unified learning framework in conjunction with a sampling-based learning strategy, without additional computation steps. We evaluate the proposed approach on several image classification tasks against existing approaches that perform model selection or learn multiple tasks. Experimental results show that our approach not only outperforms the competitors but also offers the versatility to perform instance-wise model selection across multiple tasks. Comment: ICCV 201
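
    The selector described above can be pictured as a small classifier over candidate models. The following PyTorch sketch only illustrates that idea under assumed names and sizes (the Selector class, hidden width, and sampling step are illustrative); it is not the authors' implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Selector(nn.Module):
            """Small network producing a probability distribution over candidate models."""
            def __init__(self, in_dim, num_models, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, num_models),
                )

            def forward(self, x):
                return F.softmax(self.net(x), dim=-1)

        def select_and_run(selector, candidate_models, x):
            """Pick a model per instance by sampling from the selector's distribution."""
            probs = selector(x.flatten(1))                          # (batch, num_models)
            idx = torch.distributions.Categorical(probs).sample()   # sampled model index per instance
            outs = [candidate_models[m](x[b:b + 1]) for b, m in enumerate(idx.tolist())]
            return torch.cat(outs, dim=0), probs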

    A Hybrid Method for E-Process Selection

    A number of e-Processes (i.e., software processes for developing e-Commerce information systems) are available in industry, and it is difficult to select the e-Process best suited to a case at hand. At the same time, this selection is important because the functionality and quality of any system under development depend on the instantiated software process. The knowledge required for the selection task is not easy to make explicit. The task can be considered an instance of multi-attribute decision making, and several of the attributes to consider are likely to conflict with each other. An efficient and effective approach to selecting software processes for developing e-commerce systems is therefore needed. In this paper we propose such an approach. It is hybrid in that it rests on case-based reasoning, multi-attribute decision making, and social choice methods. To demonstrate how our approach works, we briefly discuss a case study.
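
    One of the ingredients named above, multi-attribute decision making, often boils down to scoring each candidate against weighted attributes. The Python sketch below shows a plain weighted-sum ranking; the attribute names, weights, and scores are illustrative assumptions, and the paper's hybrid method additionally involves case-based reasoning and social choice.

        # Weighted-sum (simple additive weighting) ranking of candidate e-Processes.
        def rank_processes(candidates, weights):
            """candidates: {name: {attribute: score}}; weights: {attribute: weight} summing to 1."""
            total = lambda scores: sum(weights[a] * scores[a] for a in weights)
            return sorted(candidates, key=lambda name: total(candidates[name]), reverse=True)

        candidates = {
            "ProcessA": {"time_to_market": 0.8, "quality_focus": 0.5, "team_fit": 0.9},
            "ProcessB": {"time_to_market": 0.6, "quality_focus": 0.9, "team_fit": 0.7},
        }
        weights = {"time_to_market": 0.5, "quality_focus": 0.3, "team_fit": 0.2}
        print(rank_processes(candidates, weights))   # best-suited candidate first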

    Proximity measures based on KKT points for constrained multi-objective optimization

    An important aspect of optimization algorithms, for instance evolutionary algorithms, is the termination criterion that measures the proximity of the found solution to the optimal solution set. A frequently used approach is the numerical verification of necessary optimality conditions such as the Karush-Kuhn-Tucker (KKT) conditions. In this paper, we present a proximity measure that characterizes the violation of the KKT conditions. It can be computed easily and is continuous in every efficient solution. Hence, it can serve as an indicator of the proximity of a given point to the set of efficient (Edgeworth-Pareto-minimal) solutions and, due to its continuity properties, is well suited for algorithmic use. This is especially useful within evolutionary algorithms for candidate selection and termination, which we also illustrate numerically on some test problems.
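
    For orientation, a generic way to quantify KKT violation for the constrained multi-objective problem of minimizing (f_1(x), ..., f_m(x)) subject to g_j(x) <= 0 is sketched below in LaTeX notation; it is a standard construction, not necessarily the exact measure defined in the paper. For a feasible point x,

        \varepsilon(x) \;=\; \min_{\substack{\lambda \ge 0,\ \sum_i \lambda_i = 1 \\ \mu \ge 0}}
          \Bigl\| \sum_{i=1}^{m} \lambda_i \nabla f_i(x) + \sum_{j} \mu_j \nabla g_j(x) \Bigr\|^2
          \;+\; \sum_{j} \bigl( \mu_j\, g_j(x) \bigr)^2 ,

    which equals zero exactly when x satisfies the KKT conditions and grows continuously with their violation.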

    DBBRBF- Convalesce optimization for software defect prediction problem using hybrid distribution base balance instance selection and radial basis Function classifier

    With the rapid development of software engineering, software has become an integral part of human life, which demands that it be highly reliable. Reliability can be assessed by efficient software testing methods that use historical defect data to build a quality software system. Machine learning plays a vital role in predicting defect-prone modules in real-life software. However, software defect prediction data suffer from a class imbalance problem, with a low ratio of the defective class to the non-defective class, which degrades classification performance and calls for an efficient machine learning classification technique. To alleviate this problem, this paper introduces a novel hybrid instance-based classification approach that combines distribution base balance based instance selection with a radial basis function neural network classifier (DBBRBF) to obtain better predictions than existing research. Class-imbalanced data sets from NASA, Promise, and Softlab were used for the experimental analysis. The experimental results in terms of Accuracy, F-measure, AUC, Recall, Precision, and Balance show the effectiveness of the proposed approach. Finally, statistical significance tests are carried out to assess the suitability of the proposed model. Comment: 32 pages, 24 tables, 8 figures
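
    The two ingredients named in the abstract are instance selection to balance the classes and a radial basis function network classifier. The Python sketch below illustrates both in a stripped-down form (plain random undersampling rather than the paper's distribution base balance selection, and k-means centres with logistic regression on RBF features); it is not the DBBRBF implementation.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        def undersample(X, y, seed=0):
            """Drop random majority-class instances until both classes have equal size."""
            rng = np.random.default_rng(seed)
            idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
            maj, mino = (idx0, idx1) if len(idx0) > len(idx1) else (idx1, idx0)
            sel = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
            return X[sel], y[sel]

        def rbf_features(X, centers, gamma=1.0):
            """Gaussian RBF activations of each row of X w.r.t. the given centres."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_rbf_classifier(X, y, n_centers=20, gamma=1.0):
            Xb, yb = undersample(X, y)
            centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(Xb).cluster_centers_
            clf = LogisticRegression(max_iter=1000).fit(rbf_features(Xb, centers, gamma), yb)
            return lambda Xnew: clf.predict(rbf_features(Xnew, centers, gamma))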

    A Multi-Engine Approach to Answer Set Programming

    Answer Set Programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming that has recently been employed in many applications. The development of efficient ASP systems is, thus, crucial. With the task of improving ASP solving methods in mind, there are two usual ways to reach this goal: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and apply machine learning techniques to automatically choose the "best" available solver on a per-instance basis. In this paper we pursue this latter direction. We first define a set of cheap-to-compute syntactic features that characterize several aspects of ASP programs. Then, we apply classification methods that, given the features of the instances in a training set and the solvers' performance on those instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering solvers and different training and test sets of instances taken from those submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows that, by applying machine learning techniques to ASP solving, it is possible to obtain very robust performance: our approach can solve more instances than any solver that entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic Programming (TPLP).) Comment: 26 pages, 8 figures
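
    The per-instance selection step can be illustrated with a small sketch: label each training instance with its fastest solver and fit a classifier from instance features to that label. The solver names, feature layout, and classifier choice below are assumptions for illustration, not the configuration evaluated in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        solvers = ["solverA", "solverB", "solverC"]              # hypothetical portfolio

        def train_selector(features, runtimes):
            """features: (n_instances, n_features); runtimes: (n_instances, n_solvers)."""
            best = np.argmin(runtimes, axis=1)                   # fastest solver per instance
            return RandomForestClassifier(n_estimators=200, random_state=0).fit(features, best)

        def choose_solver(selector, instance_features):
            """Pick a solver for one new instance from its syntactic feature vector."""
            return solvers[selector.predict(instance_features.reshape(1, -1))[0]]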

    A comparison of two methods for prediction of response and rates of inbreeding in selected populations with the results obtained in two selection experiments

    Selection programmes are mainly concerned with increasing genetic gain. However, short-term progress should not be obtained at the expense of within-population genetic variability. Different models for predicting the evolution, within a small population, of the genetic mean of a selected trait, its genetic variance, and its inbreeding have been developed, but they have mainly been validated through Monte Carlo simulation studies. The purpose of this study was to compare theoretical predictions with experimental results. Two deterministic methods were considered, both grounded on a polygenic additive model. Differences between theoretical predictions and experimental results arise from differences between the true and the assumed genetic model, and from mathematical simplifications applied in the prediction methods. Two sets of experimental lines of chickens were used in this study: the Dutch lines underwent true truncation mass selection, while the other (French) lines underwent mass selection with a restriction on the representation of the different families. This study confirmed, on an experimental basis, that modelling is an efficient approach for making useful predictions of the evolution of selected populations, even though the basic assumptions of the models (polygenic additive model, normality of the distribution, base population at equilibrium, etc.) are not met in reality. The two deterministic methods compared yielded results close to those observed in the real data, especially when the selection scheme followed the rules of strict mass selection: for instance, both predictions overestimated the genetic gain in the French experiment, whereas both were close to the observed values in the Dutch experiment.
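
    As background for the quantities being predicted, the classical textbook approximations for the per-generation response to truncation mass selection and the rate of inbreeding (not necessarily the two deterministic methods compared in the paper) read, in LaTeX notation,

        \Delta G \;=\; i\, h^{2}\, \sigma_{P}, \qquad
        \Delta F \;\approx\; \frac{1}{8 N_m} + \frac{1}{8 N_f},

    where i is the selection intensity corresponding to the selected proportion, h^2 the heritability, \sigma_P the phenotypic standard deviation, and N_m, N_f the numbers of sires and dams contributing to each generation.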