
    Instance and feature weighted k-nearest-neighbors algorithm

    We present a novel method that aims to provide a more stable selection of feature subsets when variations occur in the training process. This is accomplished by applying an instance-weighting process (assigning different importance to each instance) as a preprocessing step to a feature weighting method that is independent of the learner, and then making good use of both sets of computed weights in a standard nearest-neighbours classifier. We report extensive experimentation on well-known benchmark datasets as well as some challenging microarray gene expression problems. Our results show increases in stability for most subset sizes and most problems, without compromising prediction accuracy.
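    The combination the abstract describes can be sketched in a few lines: per-feature weights enter the distance computation, and per-instance weights scale each neighbour's vote. This is only an illustrative sketch of the general idea, not the paper's actual method; all names and the voting scheme are assumptions.

    ```python
    import numpy as np

    def weighted_knn_predict(X_train, y_train, x, feature_w, instance_w, k=3):
        """Classify x with a k-NN whose distance uses per-feature weights and
        whose votes are scaled by per-instance weights (illustrative sketch)."""
        # Feature-weighted squared Euclidean distance to every training instance.
        d = ((X_train - x) ** 2 * feature_w).sum(axis=1)
        nn = np.argsort(d)[:k]  # indices of the k nearest neighbours
        votes = {}
        for i in nn:  # each neighbour votes with its instance weight
            votes[y_train[i]] = votes.get(y_train[i], 0.0) + instance_w[i]
        return max(votes, key=votes.get)

    # Tiny usage example with uniform weights (reduces to plain k-NN):
    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
    y = np.array([0, 0, 1, 1])
    pred = weighted_knn_predict(X, y, np.array([0.2, 0.1]),
                                np.ones(2), np.ones(4), k=3)
    ```
    
    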

    Exploiting the accumulated evidence for gene selection in microarray gene expression data

    Machine learning methods have of late made significant efforts toward solving multidisciplinary problems in the field of cancer classification using microarray gene expression data. Feature subset selection methods can play an important role in the modeling process, since these tasks are characterized by a large number of features and few observations, making the modeling a non-trivial undertaking. In this particular scenario, it is extremely important to select genes by taking into account their possible interactions with other gene subsets. This paper shows that, by accumulating the evidence in favour of (or against) each gene along the search process, the obtained gene subsets may constitute better solutions, in terms of predictive accuracy or gene subset size, or both. The proposed technique is extremely simple and applicable at a negligible overhead in cost.
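    The core idea, accumulating each gene's evaluations across the whole search rather than discarding them after every step, can be sketched as a greedy forward search that selects by running mean of all evidence seen so far. This is a hedged illustration, not the paper's algorithm; `evaluate` stands in for an arbitrary (possibly noisy) subset scorer.

    ```python
    def forward_select_with_evidence(n_features, evaluate, k):
        """Greedy forward selection that accumulates each feature's evaluation
        evidence across steps instead of discarding it (illustrative sketch;
        `evaluate(subset)` is a user-supplied, possibly noisy scorer)."""
        selected = []
        evidence = {f: [] for f in range(n_features)}
        while len(selected) < k:
            for f in range(n_features):
                if f not in selected:
                    evidence[f].append(evaluate(selected + [f]))
            # Choose by the mean of ALL evaluations seen so far, not just
            # this step's -- noise in any single evaluation averages out.
            best = max((f for f in range(n_features) if f not in selected),
                       key=lambda f: sum(evidence[f]) / len(evidence[f]))
            selected.append(best)
        return selected

    # Toy scorer where feature 0 carries all the signal:
    score = lambda s: (1.0 if 0 in s else 0.0) + 0.1 * len(s)
    chosen = forward_select_with_evidence(4, score, k=2)
    ```
    
    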

    Towards more reliable feature evaluations for classification

    In this thesis we study feature subset selection and feature weighting algorithms. Our aim is to make their output more stable and more useful when used to train a classifier. We begin by defining the concept of stability and selecting a measure to assess the output of the feature selection process. Then we study different sources of instability and propose modifications of classic algorithms that improve their stability. We propose a modification of wrapper algorithms that takes otherwise unused information into account to overcome an intrinsic source of instability for these algorithms: the fact that the feature assessment is a random variable that depends on the particular training subsample. Our version accumulates the evaluation results of each feature at each iteration to average out the effect of the randomness. Another novel proposal is to make wrappers evaluate the remainder set of features at each step to overcome another source of instability: the randomness of the algorithms themselves. In this case, by evaluating the non-selected set of features, the initial choice of variables is more educated. These modifications do not bring a great amount of computational overhead and deliver better results, both in terms of stability and predictive power. We finally tackle another source of instability: the differential contribution of the instances to feature assessment. We present a framework to combine almost any instance weighting algorithm with any feature weighting one. Our combinations of algorithms deliver more stable results for the various feature weighting algorithms we have tested. Finally, we present a deeper integration of instance weighting with feature weighting by modifying the Simba algorithm, which delivers even better results in terms of stability.
    The focus of this thesis is to measure, study and improve the stability of feature subset selection (FSS) and feature weighting (FW) algorithms in a supervised learning context. The general purpose of FSS in a classification context is to improve prediction accuracy. We argue that there is another great challenge in FSS and FW: the stability of the results. Having chosen a stability measure from among those studied, we propose improvements to a very popular algorithm: Relief. We analyse several distance measures besides the original one and study their effect on accuracy, redundancy detection and stability. We also test different ways of using the weights computed at each step to influence the distance calculation, in a similar way to another FW algorithm: Simba. We further improve its stability by increasing the contribution of the feature weights to the distance calculation as time goes by, to minimize the impact of the random selection of the first instances. As for wrapper algorithms, we modify them to take into account information that was previously ignored, in order to overcome an intrinsic source of instability: the fact that the evaluation of the features is a random variable that depends on the data subset used. Our version accumulates the results of each iteration to compensate for the random effect, whereas the original versions discard all the information gathered about each feature in a given iteration and start anew in the next, leading to more unstable results. Another proposal is to make these wrappers evaluate the non-selected subset of features at each iteration to avoid another source of instability. These modifications do not entail a large increase in computational cost, and their results are more stable and more useful to a classifier. Finally, we propose weighting the contribution of each instance to feature evaluation.
    There may exist atypical observations that should not be taken into account as much as the others; if we are trying to predict a cancer using information from genetic analyses, we should give less credibility to data obtained from people exposed to high levels of radiation, even if we have no information about that exposure. Instance weighting (IW) methods aim to identify these cases and assign them lower weights. Several authors have worked on IW schemes to improve FSS, but there is no previous work on combining IW with FW. We present a framework to combine IW algorithms with FW ones. We also propose a new IW algorithm based on the concept of decision margin used by some FW algorithms. Within this framework we have tested the modifications against the original versions using several datasets from the UCI repository, DNA microarray data, and the datasets used in the NIPS-2003 FSS challenge. Our combinations of instance and feature weighting algorithms deliver more stable results for the several feature weighting algorithms we have studied. Finally, we present a deeper integration of instance weighting with the Simba feature selection algorithm, consisting in using the instance weights to weight the distance calculation, with which we obtain even better results in terms of stability. The main contributions of this thesis are: (i) providing a framework to combine IW with FW, (ii) a review of FSS stability measures, (iii) several modifications of FSS and FW algorithms that improve their stability and the predictive power of the selected feature subsets without a significant increase in computational cost, (iv) a theoretical definition of feature importance, and (v) the study of the relation between FSS stability and feature redundancy.
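    Stability of a selection procedure is commonly quantified by comparing the subsets it returns across repeated runs (e.g. over resampled training data). One simple measure of this kind, the average pairwise Jaccard similarity, can be sketched as follows; the thesis surveys several such measures, and this particular choice is an assumption for illustration.

    ```python
    def selection_stability(subsets):
        """Average pairwise Jaccard similarity between the feature subsets
        selected over repeated runs: 1.0 means identical selections every
        time, values near 0 mean highly unstable selections (one of several
        stability measures discussed in the literature)."""
        sims = []
        n = len(subsets)
        for i in range(n):
            for j in range(i + 1, n):
                a, b = set(subsets[i]), set(subsets[j])
                sims.append(len(a & b) / len(a | b))  # Jaccard index
        return sum(sims) / len(sims)

    perfectly_stable = selection_stability([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
    unstable = selection_stability([[1, 2], [3, 4]])
    ```
    
    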

    Trade and Unemployment: What Do the Data Say?

    This paper documents a robust empirical regularity: in the long run, higher trade openness is causally associated with a lower structural rate of unemployment. We establish this fact using: (i) panel data from 20 OECD countries, and (ii) cross-sectional data on a larger set of countries. The time structure of the panel data allows us to deal with endogeneity concerns, whereas the cross-sectional data make it possible to instrument openness by its geographical component. In both setups, we carefully purge the data of business cycle effects, include a host of institutional and geographical variables, and control for within-country trade. Our main finding is robust to various definitions of unemployment rates and openness measures. The preferred specification suggests that a 10 percent increase in total trade openness reduces unemployment by about one percentage point. Moreover, we show that openness affects unemployment mainly through its effect on TFP and that labor market institutions do not appear to condition the effect of openness.
    Keywords: international trade, real openness, unemployment, GMM models, IV estimation
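    The identification strategy in the cross-section, instrumenting openness by its geographical component, is standard two-stage least squares. A minimal numpy sketch on synthetic data illustrates the mechanics; variable names, the data-generating process, and the true effect size here are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    geo = rng.normal(size=n)                  # geography-based instrument
    u = rng.normal(size=n)                    # unobserved confounder
    openness = 0.8 * geo + 0.5 * u + rng.normal(size=n)
    # True causal effect of openness on unemployment set to -0.1:
    unemployment = -0.1 * openness + 0.5 * u + rng.normal(size=n)

    def two_sls(y, x, z):
        """2SLS with one endogenous regressor, one instrument, and a constant."""
        Z = np.column_stack([np.ones_like(z), z])
        # Stage 1: project the endogenous regressor on the instrument.
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
        # Stage 2: regress the outcome on the fitted values; return the slope.
        return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

    beta_iv = two_sls(unemployment, openness, geo)
    ```

    Because the confounder `u` moves openness and unemployment in the same direction here, naive OLS would be biased toward zero or even positive, while the instrumented estimate recovers a negative effect.
    
    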

    Globalization and Labor Market Outcomes: Wage Bargaining, Search Frictions, and Firm Heterogeneity

    We introduce search unemployment à la Pissarides into Melitz's (2003) model of trade with heterogeneous firms. We allow wages to be individually or collectively bargained and analytically solve for the equilibrium. We find that the selection effect of trade influences labor market outcomes. Trade liberalization lowers unemployment and raises real wages as long as it improves aggregate productivity net of transport costs. We show that this condition is likely to be met by a reduction in variable trade costs or the entry of new trading countries. On the other hand, the gains from a reduction in fixed market access costs are more elusive. Calibrating the model shows that the positive impact of trade openness on employment is significant when wages are bargained at the individual level but much smaller when wages are bargained at the collective level.
    Keywords: trade liberalization, unemployment, search model, firm heterogeneity


    Cross-connection management specialisation for WDM-OTNs

    WDM optical transport networks (WDM-OTNs) will use new all-optical nodes that perform their functions in the optical domain, using wavelength as a new network resource. This paper presents a new approach to the routing operation of the optical cross-connect (OXC) that takes into account a diminished connection capability.
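    A cross-connect with diminished connection capability cannot switch every incoming link to every outgoing link, so routing must respect per-node connection constraints. The toy breadth-first search below illustrates that general idea only; the data structures and constraint model are assumptions, not the paper's OXC routing scheme.

    ```python
    from collections import deque

    def route(links, allowed, src, dst):
        """Find a path from src to dst where each intermediate node only
        permits certain incoming-neighbour -> outgoing-neighbour transitions
        (a toy model of diminished connection capability).
        `links` maps a node to its neighbours; `allowed[node]` is the set of
        (incoming, outgoing) neighbour pairs the node can cross-connect."""
        queue = deque([(src, None, [src])])  # (node, previous node, path)
        seen = {(src, None)}
        while queue:
            node, prev, path = queue.popleft()
            if node == dst:
                return path
            for nxt in links[node]:
                if prev is not None and (prev, nxt) not in allowed.get(node, set()):
                    continue  # this cross-connect cannot make the turn
                if (nxt, node) not in seen:
                    seen.add((nxt, node))
                    queue.append((nxt, node, path + [nxt]))
        return None

    # Square network A-B-C-D-A; node B can connect nothing through itself,
    # node D can only connect traffic arriving from A onward to C.
    topo = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['A', 'C']}
    caps = {'B': set(), 'D': {('A', 'C')}}
    path = route(topo, caps, 'A', 'C')
    ```

    The shorter-looking path through B is rejected because B cannot cross-connect A's traffic toward C, so the route goes through D instead.
    
    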

    Remainder subset awareness for feature subset selection

    Feature subset selection has become an increasingly common topic of research, a popularity partly due to the growth in the number of features and application domains. The family of algorithms known as plus-l-minus-r and its immediate derivatives (like forward selection) are very popular and often the only viable alternative when used in wrapper mode. In consequence, it is of the greatest importance to make the most of every evaluation of the inducer, which is normally the most costly part. In this paper, a technique is proposed that takes into account the inducer evaluation both on the current subset and on the remainder subset (its complementary set), and is applicable to any sequential subset selection algorithm at a reasonable overhead in cost. Its feasibility is demonstrated on a series of benchmark datasets.
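    One way to use the remainder subset in a sequential search is sketched below: each candidate is scored not only by how much it improves the current subset but also by how much predictive power the remainder loses without it. This is an illustrative interpretation under stated assumptions (the combination rule, `alpha`, and `evaluate` are all invented for the sketch), not the paper's exact technique.

    ```python
    def forward_select_remainder(n_features, evaluate, k, alpha=0.5):
        """Sequential forward selection that scores each candidate feature by
        combining the evaluation of the grown subset with that of its
        complement (illustrative sketch; `evaluate` stands in for an
        inducer-based scorer and `alpha` weights the two evaluations)."""
        selected = []
        all_f = set(range(n_features))
        while len(selected) < k:
            def score(f):
                cur = evaluate(selected + [f])
                rem = evaluate(sorted(all_f - set(selected) - {f}))
                # A good candidate raises the current subset's score and
                # lowers what the remainder can still achieve without it.
                return alpha * cur - (1 - alpha) * rem
            best = max(all_f - set(selected), key=score)
            selected.append(best)
        return selected

    # Toy scorer: fraction of the truly relevant features {0, 1} present.
    relevance = lambda s: len(set(s) & {0, 1}) / 2
    picked = forward_select_remainder(4, relevance, k=2)
    ```
    
    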