
    Using linear programming for weights identification of generalized Bonferroni means in R

    The generalized Bonferroni mean is able to capture some interaction effects between variables and model mandatory requirements. We present a number of weights identification algorithms we have developed in the R programming language in order to model data using the generalized Bonferroni mean subject to various preferences. We then compare its accuracy when fitted to the journal ranks dataset.
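    As a rough illustration of the kind of weight-identification problem involved (a minimal sketch, not the paper's algorithm), the following R code fits the weights of a plain weighted mean by minimising the sum of absolute residuals as a linear programme with the lpSolve package; the paper extends this idea to the generalized Bonferroni mean and to additional preference constraints.

    # Minimal sketch (not the authors' code): identify the weights of a weighted
    # arithmetic mean by linear programming, minimising the L1 fitting error.
    library(lpSolve)

    fit_weights_l1 <- function(X, y) {
      n <- nrow(X); m <- ncol(X)
      # Decision variables: m weights, then n positive and n negative residuals
      obj <- c(rep(0, m), rep(1, 2 * n))
      # Fit constraints: X %*% w + r_pos - r_neg = y
      fit_con <- cbind(X, diag(n), -diag(n))
      # Simplex constraint: weights sum to one (non-negativity is lp()'s default)
      sum_con <- c(rep(1, m), rep(0, 2 * n))
      res <- lp(direction = "min", objective.in = obj,
                const.mat = rbind(fit_con, sum_con),
                const.dir = rep("=", n + 1),
                const.rhs = c(y, 1))
      res$solution[1:m]  # fitted weights
    }

    # Toy usage with simulated data
    set.seed(1)
    X <- matrix(runif(60), nrow = 20)
    y <- as.numeric(X %*% c(0.5, 0.3, 0.2)) + rnorm(20, sd = 0.01)
    fit_weights_l1(X, y)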

    CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features

    In this paper we propose a crossover operator for evolutionary algorithms with real values that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population, with the objective that these features are inherited by the offspring. Our aim is to optimize the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity and epistasis. With this set of functions we can draw conclusions that depend on the nature of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained exceed the performance of standard methods.
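    The following toy sketch illustrates the general idea described above (it is not the exact CIXL2 operator): per-gene localization and dispersion are estimated from the best individuals, a confidence interval is built for each gene, and offspring genes are drawn from it and blended with a parent.

    # Toy sketch of a confidence-interval-style crossover (illustrative only,
    # not the exact CIXL2 operator): estimate per-gene mean and dispersion from
    # the best individuals and draw offspring genes from the resulting interval.
    ci_crossover <- function(pop, fitness, parent, n_best = 5, conf = 0.95) {
      best  <- pop[order(fitness, decreasing = TRUE)[seq_len(n_best)], , drop = FALSE]
      mu    <- colMeans(best)                       # localization of the best individuals
      se    <- apply(best, 2, sd) / sqrt(n_best)    # dispersion of the best individuals
      tq    <- qt(1 - (1 - conf) / 2, df = n_best - 1)
      lower <- mu - tq * se
      upper <- mu + tq * se
      # Offspring: uniform draw inside each gene's confidence interval,
      # blended with the parent to retain some exploration
      child <- runif(length(mu), lower, upper)
      0.5 * parent + 0.5 * child
    }

    # Toy usage: population of 30 individuals with 4 real-valued genes
    set.seed(42)
    pop <- matrix(rnorm(120), nrow = 30)
    fit <- -rowSums(pop^2)   # maximising the negative sphere function
    ci_crossover(pop, fit, parent = pop[1, ])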

    Identification of gene pathways implicated in Alzheimer's disease using longitudinal imaging phenotypes with sparse regression

    We present a new method for the detection of gene pathways associated with a multivariate quantitative trait, and use it to identify causal pathways associated with an imaging endophenotype characteristic of longitudinal structural change in the brains of patients with Alzheimer's disease (AD). Our method, known as pathways sparse reduced-rank regression (PsRRR), uses group lasso penalised regression to jointly model the effects of genome-wide single nucleotide polymorphisms (SNPs), grouped into functional pathways using prior knowledge of gene-gene interactions. Pathways are ranked in order of importance using a resampling strategy that exploits finite-sample variability. Our application study uses whole genome scans and MR images from 464 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. 66,182 SNPs are mapped to 185 gene pathways from the KEGG pathways database. Voxel-wise imaging signatures characteristic of AD are obtained by analysing 3D patterns of structural change at 6, 12 and 24 months relative to baseline. High-ranking, AD endophenotype-associated pathways in our study include those describing chemokine, Jak-stat and insulin signalling pathways, and tight junction interactions. All of these have been previously implicated in AD biology. In a secondary analysis, we investigate SNPs and genes that may be driving pathway selection, and identify a number of previously validated AD genes including CR1, APOE and TOMM40.
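    As a minimal sketch of the group-penalised regression idea that PsRRR builds on (not the PsRRR implementation itself, which combines it with sparse reduced-rank regression on a multivariate imaging phenotype), the following R code fits a group lasso with SNPs grouped into hypothetical pathways using the grpreg package on simulated data.

    # Minimal sketch (not the PsRRR implementation): group-lasso regression
    # with SNPs grouped into pathways, using the grpreg package.
    # Data and group labels are simulated placeholders.
    library(grpreg)

    set.seed(2024)
    n_subjects <- 200; n_snps <- 100
    X <- matrix(rbinom(n_subjects * n_snps, size = 2, prob = 0.3),
                nrow = n_subjects)                  # additive SNP coding 0/1/2
    pathway <- rep(1:10, each = 10)                 # 10 hypothetical pathways
    beta <- c(rep(0.3, 10), rep(0, 90))             # only pathway 1 is causal
    y <- as.numeric(X %*% beta) + rnorm(n_subjects) # univariate stand-in for the
                                                    # multivariate imaging phenotype

    fit <- cv.grpreg(X, y, group = pathway, penalty = "grLasso")
    selected <- unique(pathway[coef(fit)[-1] != 0])
    selected   # pathways with non-zero coefficients at the CV-chosen lambda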

    Use of multivariate statistical methods for the analysis of metabolomic data

    In the last decades, technological advances have generated an increasing amount of data in the fields of biology and biomedicine. The so-called "omics" technologies, such as genomics, epigenomics, transcriptomics or metabolomics, among others, produce data sets with hundreds, thousands or even millions of variables. The analysis of omics data presents methodological and computational complexities that have driven a revolution in the development of new statistical methods specifically designed for this type of data. On top of these methodological complexities, the logistic and/or economic constraints of research projects usually lead to small sample sizes paired with these very wide data sets, which makes the analyses even harder, since there are many more variables than observations. Among the methods developed to deal with such data are some based on coefficient penalisation, such as lasso or elastic net, others based on projection onto latent structures, such as PCA or PLS, and others based on trees or tree ensembles, such as random forest. All these techniques work well on omics data arranged as a matrix (IxJ). Sometimes, however, omics data are expanded, for example by taking repeated measurements over time on the same individuals, yielding data structures that are no longer matrices but three-way arrays (IxJxK). In these cases most of the techniques above lose all or a good part of their applicability, leaving very few viable options for analysing this type of data structure. One technique that is useful for analysing three-way structures when a response Y is to be predicted is N-PLS, which reduces the inclusion of noise in the models, obtains more robust parameters than ordinary PLS, and produces easy-to-interpret plots.
    Related to the problem of small sample sizes and exorbitant numbers of variables is the need for variable selection. Variable selection is essential for the biological interpretation of results when analysing omics data: the aim is usually not only to predict the outcome, but also to understand why it happens and which variables are involved, and ideally to make new predictions without collecting hundreds of thousands of variables again, using instead a small number of the most important ones so that cost-effective predictive kits of real practical utility can be designed. The main goal of this thesis is therefore to improve existing methods for omics data analysis, specifically those aimed at three-way data, by incorporating variable selection and improving both predictive capacity and the interpretability of the results. All of this is implemented in a fully documented R package that includes all the functions needed to carry out complete analyses of three-way data.
    The work included in this thesis consists of a first theoretical-conceptual part in which the idea of the algorithm is developed, tuned, validated and assessed; a second empirical-practical part in which the algorithm is compared with other existing variable selection methodologies; and an additional programming and software development part in which the whole R package, its functionality and its analysis capabilities are presented. The development and validation of the technique, together with the publication of the R package, broadens the options currently available for this kind of analysis and has opened many future research lines.
    Hervás Marín, D. (2019). Use of multivariate statistical methods for the analysis of metabolomic data [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/130847
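    As a minimal illustration of the data structures discussed above (this is not the thesis's N-PLS or variable-selection code), the sketch below builds a three-way IxJxK array, unfolds it into an I x (J*K) matrix, and fits an ordinary PLS model with the pls package; N-PLS itself works directly on the trilinear structure rather than on the unfolded matrix.

    # Minimal sketch of the data shapes involved (not the thesis's algorithm):
    # unfold a three-way I x J x K array into an I x (J*K) matrix and fit
    # ordinary PLS with the pls package. N-PLS would instead model the
    # trilinear structure directly.
    library(pls)

    set.seed(7)
    n_obs <- 30; n_var <- 50; n_time <- 4      # subjects x metabolites x time points
    X3 <- array(rnorm(n_obs * n_var * n_time), dim = c(n_obs, n_var, n_time))
    y  <- apply(X3[, 1:5, ], 1, mean) + rnorm(n_obs, sd = 0.1)  # outcome driven by 5 variables

    X_unfolded <- matrix(X3, nrow = n_obs)     # mode-1 unfolding: I x (J*K)
    dat <- data.frame(y = y, X = I(X_unfolded))

    fit <- plsr(y ~ X, ncomp = 3, data = dat, validation = "CV")
    summary(fit)                               # explained variance per component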

    Designing Optimal Taxes with a Microeconometric Model of Household Labour Supply

    The purpose of this paper is to present an exercise in which we identify optimal income tax rules under the constraint of fixed tax revenue. To this end, we estimate a microeconometric model with 78 parameters that capture heterogeneity in consumption-leisure preferences for singles and couples, as well as in job opportunities across individuals, based on detailed Norwegian household data for 1994. For any given tax rule, the estimated model can be used to simulate the choices made by single individuals and couples. Those choices are therefore generated by preferences and opportunities that vary across the decision units. Differently from what is common in the literature, we do not rely on a priori theoretical optimal taxation results; instead, we identify optimal tax rules, within a class of 6-parameter piece-wise linear rules, by iteratively running the model until a given social welfare function attains its maximum under the constraint of keeping the total net tax revenue constant. We explore a variety of social welfare functions with differing degrees of inequality aversion, as well as two alternative social welfare principles, namely equality of outcome and equality of opportunity. All the social welfare functions turn out to imply an average tax rate lower than the current 1994 one. Moreover, all the optimal rules imply, with respect to the current rule, lower marginal rates on low and/or average income levels and higher marginal rates on relatively high income levels. These results are partially at odds with the tax reforms that took place in many countries during the last decades. While those reforms embodied the idea of lowering average tax rates, the way to implement it has typically consisted in reducing the top marginal rates. Our results instead suggest lowering average tax rates by reducing marginal rates on low and average income levels and increasing marginal rates on very high income levels.
    Keywords: labour supply, optimal taxation, random utility model, microsimulation.
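    The following is a schematic toy sketch of the kind of search described above (it is not the authors' microeconometric model): a stylised two-bracket piece-wise linear tax rule is evaluated on simulated incomes, and its parameters are tuned with optim() to maximise a simple log-utility welfare measure while a penalty keeps total revenue near a fixed target.

    # Schematic toy sketch (not the authors' model): optimise a two-bracket
    # piecewise-linear tax rule, maximising a simple social welfare function
    # while penalising deviations from a fixed revenue target.
    set.seed(123)
    income <- rlnorm(5000, meanlog = 10, sdlog = 0.6)   # simulated gross incomes

    tax <- function(y, par) {
      # par = c(rate1, rate2, threshold): two marginal rates and a bracket cutoff
      pmin(y, par[3]) * par[1] + pmax(y - par[3], 0) * par[2]
    }

    objective <- function(par, target_revenue) {
      net <- income - tax(income, par)
      if (any(net <= 0)) return(1e10)
      welfare <- mean(log(net))                         # log utility: inequality averse
      revenue <- sum(tax(income, par))
      penalty <- 1e3 * (revenue / target_revenue - 1)^2 # keep revenue near the target
      -welfare + penalty                                # optim() minimises
    }

    target <- 0.3 * sum(income)                         # revenue fixed at 30% of gross income
    res <- optim(c(0.2, 0.4, 30000), objective, target_revenue = target,
                 method = "L-BFGS-B", lower = c(0, 0, 1000), upper = c(0.9, 0.9, 1e6))
    res$par   # optimised marginal rates and bracket threshold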

    Fuzzy Logic in Decision Support: Methods, Applications and Future Trends

    During the last decades, the art and science of fuzzy logic have witnessed significant developments and have found applications in many active areas, such as pattern recognition, classification and control systems. A large body of research has demonstrated the ability of fuzzy logic to deal with vague and uncertain linguistic information. As a means of representing human perception, fuzzy logic has been employed as an effective tool in intelligent decision making. Given the large number of studies on fuzzy logic-based decision-making methods, a comprehensive overview of published papers in this field and their applications is needed. This paper covers a wide range of both theoretical and practical applications of fuzzy logic in decision making, organised into five parts. To explain the role of fuzzy logic in decision making, we first present some basic ideas underlying different types of fuzzy logic and the structure of the fuzzy logic system. We then review evaluation methods, prediction methods, decision support algorithms and group decision-making methods based on fuzzy logic. Applications of these methods are reviewed next. Finally, some challenges and future trends are discussed from different perspectives. This paper shows that the combination of fuzzy logic and decision-making methods offers extensive research prospects, and it can help researchers identify the frontiers of fuzzy logic in the field of decision making.
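    As a toy illustration of the basic fuzzy-inference machinery surveyed above (fuzzification, rule firing and defuzzification), the sketch below is written from scratch and does not use any particular fuzzy-logic package; the membership functions, rules and output values are all made up for the example.

    # Toy illustration of the fuzzy-inference steps: fuzzification with
    # triangular membership functions, rule firing, and a Sugeno-style
    # weighted-average defuzzification. All values are hypothetical.
    trimf <- function(x, a, b, c) pmax(pmin((x - a) / (b - a), (c - x) / (c - b)), 0)

    fuzzy_heater <- function(temperature) {
      # Fuzzification: degree of membership in two linguistic terms
      cold <- trimf(temperature, -10, 0, 15)
      hot  <- trimf(temperature,   5, 25, 40)
      # Rules: IF cold THEN heater high (80); IF hot THEN heater low (10)
      firing <- c(cold, hot)
      output <- c(80, 10)
      # Defuzzification: weighted average of the rule outputs
      sum(firing * output) / sum(firing)
    }

    fuzzy_heater(3)    # mostly cold -> heater setting near 80
    fuzzy_heater(22)   # mostly hot  -> heater setting near 10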

    Optimized adaptive enrichment designs

    Based on a Bayesian decision theoretic approach, we optimize frequentist single-stage and adaptive two-stage trial designs for the development of targeted therapies, where in addition to an overall population, a pre-defined subgroup is investigated. In such settings, the losses and gains of decisions can be quantified by utility functions that account for the preferences of different stakeholders. In particular, we optimize expected utilities from the perspectives of both a commercial sponsor, maximizing the net present value, and society, maximizing the cost-adjusted expected health benefits of a new treatment for a specific population. We consider single-stage and adaptive two-stage designs with partial enrichment, where the proportion of patients recruited from the subgroup is a design parameter. For the adaptive designs, we use a dynamic programming approach to derive optimal adaptation rules. The proposed designs are compared to trials which are non-enriched (i.e. the proportion of patients in the subgroup corresponds to the prevalence in the underlying population). We show that partial enrichment designs can substantially improve the expected utilities. Furthermore, adaptive partial enrichment designs are more robust than single-stage designs and retain high expected utilities even when evaluated under a different prior than the one used in the optimization. In addition, we find that trials optimized for the sponsor's utility function have smaller sample sizes than trials optimized under the societal view and may include the overall population (with patients from the complement of the subgroup) even if there is substantial evidence that the therapy is only effective in the subgroup.
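    The sketch below is a schematic toy version of the single-stage setting described above (it is not the paper's dynamic-programming optimisation, and all effect sizes, costs and gains are hypothetical): the expected utility of a trial is estimated by Monte Carlo over a grid of subgroup enrichment proportions, illustrating how partial enrichment can be treated as a design parameter.

    # Schematic toy sketch (not the paper's optimisation): Monte Carlo estimate
    # of the expected utility of a single-stage trial for a grid of subgroup
    # enrichment proportions. Effect sizes, gain and costs are hypothetical.
    set.seed(99)

    expected_utility <- function(prop_subgroup, n_total = 300, n_sim = 2000,
                                 effect_sub = 0.4, effect_comp = 0.3,
                                 gain = 100, cost_per_patient = 0.1) {
      n_sub  <- round(n_total * prop_subgroup)
      n_comp <- n_total - n_sub
      utils <- replicate(n_sim, {
        # Prior uncertainty about whether the drug also works outside the subgroup
        works_comp <- rbinom(1, 1, 0.3)
        eff <- c(rep(effect_sub, n_sub), rep(effect_comp * works_comp, n_comp))
        z   <- mean(rnorm(n_total, mean = eff)) * sqrt(n_total)  # overall test statistic
        success <- z > qnorm(0.975)
        success * gain - cost_per_patient * n_total
      })
      mean(utils)
    }

    props <- seq(0.2, 1, by = 0.2)
    data.frame(enrichment = props,
               expected_utility = sapply(props, expected_utility))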