
    Multinomial Logit Models with Implicit Variable Selection

    Multinomial logit models, which are most commonly used for modeling unordered multi-category responses, are typically restricted to few predictors. In the high-dimensional case, maximum likelihood estimates frequently do not exist. In this paper we develop a boosting technique called multinomBoost that performs variable selection and fits the multinomial logit model even when predictors are high-dimensional. Since in multi-category models the effect of one predictor variable is represented by several parameters, one has to distinguish between variable selection and parameter selection. A special feature of the approach is that, in contrast to existing approaches, it selects variables, not parameters. The method can distinguish between mandatory and optional predictors. Moreover, it adapts to metric, binary, nominal and ordinal predictors. Regularization within the algorithm allows the inclusion of nominal and ordinal variables with many categories; in the case of ordinal predictors, the order information is used. The performance of the boosting technique with respect to mean squared error, prediction error and the identification of relevant variables is investigated in a simulation study. For two real-life data sets the results are also compared with the Lasso approach, which selects parameters.
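    The variable-vs-parameter distinction can be illustrated with a small componentwise-boosting sketch: at each step, the single predictor whose full per-class coefficient vector best fits the negative gradient is updated, so whole variables rather than individual parameters enter the model. This is a hypothetical illustration of the idea, not the paper's multinomBoost implementation; names and step sizes are placeholders.

    ```python
    import numpy as np

    def softmax(Z):
        Z = Z - Z.max(axis=1, keepdims=True)
        E = np.exp(Z)
        return E / E.sum(axis=1, keepdims=True)

    def multinom_boost(X, y, n_classes, steps=100, nu=0.1):
        """Componentwise boosting with *variable* selection: each step updates
        the whole per-class coefficient vector of one predictor (a sketch of
        the idea, not the paper's algorithm)."""
        n, p = X.shape
        Y = np.eye(n_classes)[y]             # one-hot response matrix
        B = np.zeros((p, n_classes))         # one coefficient row per variable
        intercept = np.zeros(n_classes)
        for _ in range(steps):
            P = softmax(intercept + X @ B)
            G = Y - P                        # negative gradient of the deviance
            # score each variable by how well it explains the gradient, jointly
            # over all classes -- variable, not parameter, selection
            scores = [np.linalg.norm(X[:, j] @ G) for j in range(p)]
            j = int(np.argmax(scores))
            B[j] += nu * (X[:, j] @ G) / (X[:, j] @ X[:, j])
            intercept += nu * G.mean(axis=0)
        return intercept, B
    ```

    Because only the selected variable's row of the coefficient matrix is updated per step, the rows left at zero are exactly the variables never selected.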

    A 4-Dimensional Markov model for the evaluation of radio access technology selection strategies in multiservice scenarios

    In order to support the conceptual development of Common Radio Resource Management (CRRM) algorithms, this paper provides an analytical approach to the performance evaluation of Radio Access Technology (RAT) selection procedures in a multi-RAT/multiservice environment. In particular, a 4-Dimensional (4D) Markovian model is devised so as to consider the allocation of voice and data services in a GERAN/UTRAN system. Through the analytical definition of well-established Key Performance Indicators (KPIs), we provide numerical results on the evaluation of a load-balancing RAT allocation policy.
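    A reduced, hypothetical version of such a model can be sketched as a 2-D Markov chain (the paper's model is 4-D, tracking voice and data on both RATs): states count ongoing calls on two RATs, a load-balancing policy routes each arrival to the less-loaded RAT, and the stationary distribution yields a blocking-probability KPI. All rates and capacities below are illustrative.

    ```python
    import itertools
    import numpy as np

    def blocking_probability(lam=4.0, mu=1.0, c1=3, c2=3):
        """Toy 2-D Markov model of RAT selection: states (n1, n2) count calls
        on each RAT; arrivals (rate lam) go to the RAT with the lower
        occupancy fraction, departures occur at rate n_i * mu per RAT."""
        states = list(itertools.product(range(c1 + 1), range(c2 + 1)))
        idx = {s: k for k, s in enumerate(states)}
        Q = np.zeros((len(states), len(states)))
        for (n1, n2) in states:
            k = idx[(n1, n2)]
            # arrival: route to the less-loaded RAT (ties -> RAT 1); block if both full
            if n1 < c1 or n2 < c2:
                if n2 >= c2 or (n1 < c1 and n1 / c1 <= n2 / c2):
                    Q[k, idx[(n1 + 1, n2)]] += lam
                else:
                    Q[k, idx[(n1, n2 + 1)]] += lam
            # departures
            if n1 > 0:
                Q[k, idx[(n1 - 1, n2)]] += n1 * mu
            if n2 > 0:
                Q[k, idx[(n1, n2 - 1)]] += n2 * mu
        np.fill_diagonal(Q, -Q.sum(axis=1))
        # stationary distribution: pi Q = 0 with sum(pi) = 1
        A = np.vstack([Q.T, np.ones(len(states))])
        b = np.zeros(len(states) + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi[idx[(c1, c2)]]   # KPI: probability an arrival is blocked (PASTA)
    ```

    With this particular policy a call is blocked only when both RATs are full, so the total occupancy behaves as an Erlang loss system with c1 + c2 servers, which gives a useful sanity check on the numerics.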

    Mixed H2/H∞ control for infinite dimensional systems

    Infinite dimensional systems often arise from distributed-parameter models consisting of partial differential equations. Although such models form a comprehensive description, they usually become manageable only through finite dimensional approximations, which are likely to neglect important effects but follow a certain structure. In contrast to common techniques for controlling infinite dimensional systems, this work focuses on robust control methods. Thus, the uncertainty structure that arises from the discretization is taken into account explicitly. Additionally, optimal performance measures can be included in the design process. The mixed H2/H∞ control approach handles disturbances and inaccuracies while guaranteeing specified energy or magnitude bounds. In order to include several of these system requirements, multi-objective robust control techniques based on the linear matrix inequality (LMI) framework are utilized. This offers great flexibility in formulating the control task and results in convex optimization problems which can be solved efficiently by semi-definite programming. A flexible robot arm structure serves as the major application example throughout this work. The model discretization leads to an LTI system of specified order with an uncertainty model obtained by considering the concrete approximation impact and frequency-domain tests. A structural analysis of the system model relates the neglected dynamics to a robust characterization. For the objective selection, stability is ensured under all expected circumstances, while optimal H2 performance, passive behavior and optimal measurement output selection are included. The undesirable spillover effect is thoroughly investigated and thus avoided.
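    As a minimal numerical building block related to the H2 performance objective, the H2 norm of a stable LTI system can be computed from the controllability Gramian via a Lyapunov equation. This is a standard textbook result, not the thesis's LMI-based multi-objective synthesis.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def h2_norm(A, B, C):
        """H2 norm of a stable LTI system dx/dt = A x + B u, y = C x:
        solve A P + P A^T + B B^T = 0 for the controllability Gramian P,
        then ||G||_2 = sqrt(trace(C P C^T))."""
        P = solve_continuous_lyapunov(A, -B @ B.T)
        return float(np.sqrt(np.trace(C @ P @ C.T)))
    ```

    For the scalar system dx/dt = -x + u, y = x, the Gramian is P = 1/2 and the H2 norm is sqrt(1/2), which the function reproduces.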

    A Hybrid Fish – Bee Optimization Algorithm for Heart Disease Prediction using Multiple Kernel SVM Classifier

    A patient's heart disease status is obtained by a heart disease detection model, which is intended for use by medical experts. Existing techniques predict heart disease using an optimal classifier; although they achieve good results, they have several drawbacks. To overcome these, the suggested technique uses an effective method for heart disease prediction. First the input data are preprocessed, and the preprocessed result is forwarded to the feature selection process, where an efficient feature selection is performed over the high-dimensional medical data using the Hybrid Fish Bee optimization algorithm (HFSBEE). The proposed algorithm parallelizes two algorithms, so that the local search behavior of the artificial bee colony algorithm and the global search of fish swarm optimization are combined to find the optimal solution. Classification is performed by passing the transformed medical dataset to a multi-kernel support vector machine (MKSVM). The proposed technique is evaluated in terms of accuracy, sensitivity, specificity, precision, recall and F-measure. For the test analysis, datasets from the UCI machine learning repository are used, i.e. Cleveland, Hungarian and Switzerland. The experimental outcome shows that the presented technique achieves an accuracy of 97.68% on the Cleveland dataset, compared with 96.03% for the existing hybrid kernel support vector machine (HKSVM) and 62.25% for the optimal rough fuzzy classifier. The proposed method is implemented on the MATLAB platform. Index terms: artificial bee colony algorithm, fish swarm optimization, multi-kernel support vector machine, optimal rough fuzzy, Cleveland, Hungarian, Switzerland.
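    The multi-kernel idea can be sketched as a fixed-weight sum of base kernels fed to an SVM with a precomputed kernel matrix. The weights and kernel choices below are arbitrary placeholders, whereas the paper tunes the combination (and the selected features) with its hybrid fish/bee optimizer.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    def multi_kernel(X1, X2, weights=(0.5, 0.5), gamma=0.5):
        """Weighted sum of a linear and an RBF kernel; the fixed weights are
        placeholders for the combination the paper optimizes."""
        linear = X1 @ X2.T
        sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
        return weights[0] * linear + weights[1] * np.exp(-gamma * sq_dist)

    # synthetic stand-in for a medical dataset
    X, y = make_classification(n_samples=200, n_features=8, random_state=0)
    K = multi_kernel(X, X)
    clf = SVC(kernel="precomputed").fit(K, y)   # SVM on the combined kernel
    acc = clf.score(K, y)                       # training accuracy
    ```

    A sum of positive semi-definite kernels is itself a valid kernel, which is why the combined matrix can be passed directly to `SVC(kernel="precomputed")`.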

    Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in a broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within the parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realizations and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelastic reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model, with more than a 12-fold reduction in the number of states relative to the original model, is able to accurately predict system response across all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated reduced-order models exhibit smooth pole transitions and continuously varying gains along a set of prescribed flight conditions, which verifies the consistent state representation obtained by the congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelastic controller synthesis and novel vehicle design.
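    The balanced-truncation step can be sketched with the standard square-root algorithm for stable systems; the paper additionally handles unstable systems and enforces state consistency via congruence transformations, which is not shown here. The test system and the target order are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        """Square-root balanced truncation of a stable system to r states:
        balance the controllability and observability Gramians, keep the
        r largest Hankel singular values."""
        Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
        Lc = cholesky(Wc, lower=True)
        Lo = cholesky(Wo, lower=True)
        U, s, Vt = svd(Lo.T @ Lc)                       # Hankel singular values in s
        S = np.diag(s[:r] ** -0.5)
        T = Lc @ Vt[:r].T @ S                           # right projection
        Ti = S @ U[:, :r].T @ Lo.T                      # left projection (Ti @ T = I)
        return Ti @ A @ T, Ti @ B, C @ T
    ```

    Truncating states with small Hankel singular values leaves quantities such as the DC gain nearly unchanged, which gives a quick correctness check.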

    CELL: a Python package for cluster expansion with a focus on complex alloys

    We present the Python package CELL, which provides a modular approach to the cluster expansion (CE) method. CELL can treat a wide variety of substitutional systems, including one-, two-, and three-dimensional alloys, in a general multi-component and multi-sublattice framework. It is capable of dealing with complex materials comprising several atoms in their parent lattice. CELL uses state-of-the-art techniques for the construction of training data sets, model selection, and finite-temperature simulations. The user interface consists of well-documented Python classes and modules (http://sol.physik.hu-berlin.de/cell/). CELL also provides visualization utilities and can be interfaced with virtually any ab initio package, total-energy codes based on interatomic potentials, and more. The usage and capabilities of CELL are illustrated by a number of examples, comprising a Cu-Pt surface alloy with oxygen adsorption, featuring two coupled binary sublattices, and the thermodynamic analysis of its order-disorder transition; the demixing transition and lattice-constant bowing of the Si-Ge alloy; and an iterative CE approach for a complex clathrate compound with a parent lattice consisting of 54 atoms.
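    At its core, a cluster expansion fit reduces to linear regression of energies on cluster correlation functions. The sketch below recovers known effective cluster interactions for a toy 1-D binary alloy; it is a generic illustration of the CE method, not the CELL API, and all names and values are invented for the example.

    ```python
    import numpy as np

    # Toy 1-D binary alloy with spins s_i = +-1 and periodic boundaries:
    # E(s) = J0 + J1 * <s_i> + J2 * <s_i s_{i+1}>  (per-site correlations)
    rng = np.random.default_rng(0)
    J_true = np.array([0.3, -0.1, 0.25])   # "effective cluster interactions"

    def correlations(s):
        """Cluster correlation functions: empty cluster, point, nearest pair."""
        pair = np.mean(s * np.roll(s, 1))
        return np.array([1.0, s.mean(), pair])

    # training set: random configurations and their (here, exactly linear) energies
    configs = [rng.choice([-1, 1], size=12) for _ in range(40)]
    Phi = np.array([correlations(s) for s in configs])   # design matrix
    E = Phi @ J_true
    J_fit, *_ = np.linalg.lstsq(Phi, E, rcond=None)      # fitted ECIs
    ```

    In practice the energies come from ab initio calculations and the cluster pool is much larger, which is where the model-selection machinery the abstract mentions becomes essential.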