
    Dominance-based Rough Set Approach, basic ideas and main trends

    The Dominance-based Rough Set Approach (DRSA) has been proposed as a machine learning and knowledge discovery methodology to handle Multiple Criteria Decision Aiding (MCDA). Due to its capacity of asking the decision maker (DM) only for simple preference information and of supplying easily understandable and explainable recommendations, DRSA gained much interest over the years and is now one of the most appreciated MCDA approaches. In fact, it has also been applied beyond the MCDA domain, as a general knowledge discovery and data mining methodology for the analysis of monotonic (and also non-monotonic) data. In this contribution, we recall the basic principles and the main concepts of DRSA, with a general overview of its developments and software. We also present a historical reconstruction of the genesis of the methodology, with a specific focus on the contribution of Roman Słowiński.
    Comment: This research was partially supported by TAILOR, a project funded by the European Union (EU) Horizon 2020 research and innovation programme under GA No 952215. This submission is a preprint of a book chapter accepted by Springer, with very few minor differences of a merely technical nature.
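    The central DRSA construct, the lower approximation of an upward union of decision classes computed from dominance relations, can be sketched minimally as follows. This is an illustrative toy implementation assuming gain-type criteria; the function names and data are ours, not from the chapter.

```python
def dominates(x, y):
    """x dominates y if x is at least as good on every (gain-type) criterion."""
    return all(a >= b for a, b in zip(x, y))

def lower_approx_upward(objects, classes, t):
    """Lower approximation of the upward union Cl_t^>= :
    objects whose set of dominating objects lies entirely within Cl_t^>=.
    `objects`: list of criterion-value tuples; `classes`: ordinal class labels."""
    result = []
    for i, x in enumerate(objects):
        dominating = [j for j, y in enumerate(objects) if dominates(y, x)]
        if all(classes[j] >= t for j in dominating):
            result.append(i)
    return result

# Toy data: three objects evaluated on two gain criteria, classes 1 (worse) / 2 (better).
objects = [(3, 3), (2, 2), (1, 1)]
classes = [2, 2, 1]
certain = lower_approx_upward(objects, classes, t=2)  # objects certainly in class >= 2
```

    Objects in the lower approximation can be assigned to the upward union without inconsistency, which is what makes the resulting decision rules explainable.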

    White Male Privilege in the Perpetration of Sexual Violence: An Exploratory Study of Young Adults

    This exploratory study examined how racism and sexism among privileged White men intersect and vary to influence factors associated with sexual violence (SV) perpetration. Specifically, this study examined the interconnection of racism and sexism via latent profile analysis (LPA); how profile groups differentially predict attitudinal responses to a victim in an acquaintance rape SV vignette regarding victim blame and rape minimization (e.g., rape myth acceptance), rape proclivity, and bystander intentions to help; and whether these attitudes change based on the victim’s race (i.e., White or Black). The sample included 628 young adult, White, heterosexual, cisgender men currently living in the United States. Results indicated that three profiles of racist and sexist attitudes (i.e., ambivalent, low racism/sexism, and high racism/sexism groups) emerged for the sample and differentially predicted responses to the acquaintance rape vignette regarding victim blame, rape minimization, rape proclivity, and bystander intentions. Individuals with profiles high in racism and sexism were more likely to endorse higher victim blame, rape minimization, and rape proclivity, and lower bystander intentions, than individuals with profiles low in racism and sexism. Individuals in the ambivalent group reported similar levels of rape minimization as the high racism/sexism group, though they reported lower levels of rape proclivity, victim blame, and bystander intentions to help. These relationships between profiles and outcomes were not moderated by victim race. In other words, participants did not respond to Black victims with more victim blame, rape minimization, and rape proclivity, and lower bystander intentions, than White victims. Implications for advancing our understanding of how to better prevent sexual violence and intervene in its effects are discussed as they relate to future research, theory, practice, and policy.

    Temporal aggregation, systematic sampling, and the Hodrick-Prescott filter

    Maravall and del Río (2001) analyzed the time aggregation properties of the Hodrick-Prescott (HP) filter, which decomposes a time series into trend and cycle, for the case of annual, quarterly, and monthly data, and showed that aggregation of the disaggregate components cannot be obtained as the exact result of direct application of an HP filter to the aggregate series. The present paper shows how, using several criteria, one can find HP decompositions for different levels of aggregation that provide similar results. As the main criterion for aggregation we use preservation of the period associated with the frequency for which the filter gain is ½; this criterion is intuitive and easy to apply. It is shown that the Ravn and Uhlig (2002) empirical rule turns out to be a first-order approximation to our criterion, and that alternative, more complex, criteria yield similar results. Moreover, the values of the parameter λ of the HP filter that provide approximately consistent results under aggregation are considerably robust with respect to the ARIMA model of the series. Aggregation is seen to work better for the case of temporal aggregation than for systematic sampling. Still, a word of caution is made concerning the desirability of exact aggregation consistency. The paper concludes with a clarification having to do with the questionable spuriousness of the cycles obtained with the HP filter.
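    The gain-½ criterion can be illustrated numerically. The sketch below assumes the standard HP filter gain G(ω) = 1 / (1 + 4λ(1 − cos ω)²); the function names are ours. Starting from the quarterly benchmark λ = 1600, it finds the λ for annual data that preserves the period at which the gain equals ½, and compares it with the Ravn-Uhlig fourth-power rule.

```python
import math

def gain_half_period(lam):
    """Period (in observations) at which the HP filter gain equals 1/2.

    Gain: G(w) = 1 / (1 + 4*lam*(1 - cos w)**2), so G(w) = 1/2
    exactly when 4*lam*(1 - cos w)**2 = 1.
    """
    w = math.acos(1.0 - 1.0 / (2.0 * math.sqrt(lam)))
    return 2.0 * math.pi / w

def lam_for_period(period):
    """Lambda that places the gain-1/2 point at `period` observations."""
    w = 2.0 * math.pi / period
    return 1.0 / (4.0 * (1.0 - math.cos(w)) ** 2)

p_q = gain_half_period(1600.0)          # gain-1/2 period in quarters (~40)
lam_annual = lam_for_period(p_q / 4.0)  # same period measured in annual observations
lam_ravn_uhlig = 1600.0 / 4.0 ** 4      # fourth-power rule gives 6.25
```

    The criterion-based value comes out close to, but not identical with, the Ravn-Uhlig value, consistent with the claim that the empirical rule is a first-order approximation to the gain-½ criterion.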

    Learning from Partial Labels

    We address the problem of partially-labeled multiclass classification, where instead of a single label per instance, the algorithm is given a candidate set of labels, only one of which is correct. Our setting is motivated by a common scenario in many image and video collections, where only partial access to labels is available. The goal is to learn a classifier that can disambiguate the partially-labeled training instances and generalize to unseen data. We define an intuitive property of the data distribution that sharply characterizes the ability to learn in this setting, and show that effective learning is possible even when all the data is only partially labeled. Exploiting this property of the data, we propose a convex learning formulation based on minimization of a loss function appropriate for the partial-label setting. We analyze the conditions under which our loss function is asymptotically consistent, as well as its generalization and transductive performance. We apply our framework to identifying faces culled from web news sources and to naming characters in TV series and movies; in particular, we annotated and experimented on a very large video data set and achieved 6% error for character naming on 16 episodes of the TV series Lost.
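    A convex partial-label loss of the kind described can be sketched as follows: the average score over the candidate set is treated as a positive margin, and each non-candidate score as a negative margin. The squared-hinge surrogate ψ and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def partial_label_loss(scores, candidates,
                       psi=lambda m: np.maximum(0.0, 1.0 - m) ** 2):
    """Convex surrogate loss for one partially-labeled instance.

    `scores`: array of per-label scores f_y(x).
    `candidates`: indices of the candidate label set (one is the true label).
    Encourages a high average score on candidates and low scores elsewhere.
    """
    mask = np.zeros(len(scores), dtype=bool)
    mask[candidates] = True
    pos = psi(scores[mask].mean())    # pull candidate scores up (on average)
    neg = psi(-scores[~mask]).sum()   # push non-candidate scores down
    return float(pos + neg)
```

    Because ψ is convex and the candidate average is linear in the scores, the resulting training objective stays convex in a linear model's parameters, which is what makes the formulation tractable.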

    Consistency conditions for regulatory analysis of financial institutions: a comparison of frontier efficiency methods

    We propose a set of consistency conditions that frontier efficiency measures should meet to be most useful for regulatory analysis or other purposes. The efficiency estimates should be consistent in their efficiency levels, rankings, and identification of best and worst firms; consistent over time and with competitive conditions in the market; and consistent with standard nonfrontier measures of performance. We provide evidence on these conditions by evaluating and comparing efficiency estimates on U.S. bank efficiency from variants of all four of the major approaches -- DEA, SFA, TFA, and DFA -- and find mixed results.

    All Thinking is 'Wishful' Thinking

    Motivation to engage in any epistemic behavior can be decomposed into two basic types that emerge in various guises across different disciplines and areas of study. The first basic dimension refers to a desire to approach versus avoid nonspecific certainty, which has epistemic value. It describes a need for an unambiguous, precise answer to a question, regardless of that answer’s specific content. The second basic dimension refers to a desire to approach versus avoid specific certainty, which has instrumental value. It concerns a need for answers that match the specific content of one’s beliefs and prior preferences. Together, they explain diverse epistemic behaviors, such as seeking, avoiding, and biasing new information, and revising and updating, versus protecting, one’s beliefs when confronted with new evidence. The relative strength of these motivational components determines the form of (Bayes-optimal) epistemic behavior that follows.