
    A new dominance intensity method to deal with ordinal information about a DM's preferences within MAVT

    Dominance measuring methods are a recent approach for dealing with complex decision-making problems involving imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method to deal with ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by the ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
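    A minimal sketch of the kind of computation described above, assuming a centroid-based distance of a triangular fuzzy number to the crisp value 0 and a simple per-alternative aggregation; the paper's exact distance and aggregation may differ, and the fuzzy values below are made up for illustration.

```python
def tfn_distance_to_zero(tfn):
    """Signed centroid of a triangular fuzzy number (a, b, c), used here as
    its distance to the crisp value 0 (one common choice among several)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def dominance_intensity(dominance):
    """dominance[(i, j)]: triangular fuzzy number describing how strongly
    alternative i dominates alternative j. Returns a crisp intensity per
    alternative, accumulated pair by pair."""
    alts = {alt for pair in dominance for alt in pair}
    intensity = {alt: 0.0 for alt in alts}
    for (i, j), tfn in dominance.items():
        d = tfn_distance_to_zero(tfn)
        if d >= 0:
            intensity[i] += d       # evidence that i dominates j
        else:
            intensity[j] -= d       # evidence that j dominates i
    return intensity

# Made-up fuzzy pairwise dominance values for three alternatives.
dominance = {
    ("A", "B"): (-0.1, 0.2, 0.5),
    ("A", "C"): (0.0, 0.3, 0.6),
    ("B", "C"): (-0.4, -0.1, 0.2),
}
scores = dominance_intensity(dominance)
print(sorted(scores, key=scores.get, reverse=True), scores)
```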

    Adding feasibility constraints to a ranking rule under a monotonicity constraint

    We propose a new point of view on the long-standing problem in which several voters have expressed a linear order relation (or ranking) over a set of candidates. For a ranking a > b > c to represent a group's opinion, it would be logical that the strength with which a > c is supported should not be less than the strength with which either a > b or b > c is supported. This intuitive property can be considered a monotonicity constraint, and has been addressed before. We extend previous approaches in the following way: as the voters are expressing linear orders, we can take the number of candidates between two candidates as a measure of the degree to which one candidate is preferred to the other. In this way, intensity of support is measured both by the number of voters who indicate that a > c is true and by the distance between a and c in these voters' rankings. The resulting distributions serve as input for a natural ranking rule that is based on stochastic monotonicity and stochastic dominance. Adapting the previous methodology turns out to be non-trivial once we add some natural feasibility constraints.
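    The sketch below illustrates only the input side of the rule described above: for every ordered pair of candidates it collects the distribution of rank distances by which voters place one above the other. The aggregation via stochastic monotonicity and dominance, and the feasibility constraints, are not reproduced; the ballots are invented.

```python
from collections import Counter

def support_distributions(ballots):
    """ballots: list of linear orders (best first), e.g. ["a", "b", "c"].
    Returns {(x, y): Counter of rank distances by which x is placed above y}."""
    dists = {}
    for ballot in ballots:
        pos = {cand: k for k, cand in enumerate(ballot)}
        for x in ballot:
            for y in ballot:
                if pos[x] < pos[y]:                  # x ranked above y on this ballot
                    d = pos[y] - pos[x]              # rank difference as intensity
                    dists.setdefault((x, y), Counter())[d] += 1
    return dists

ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
for pair, counter in sorted(support_distributions(ballots).items()):
    print(pair, dict(counter))
```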

    Reliable statistical modeling of weakly structured information

    The statistical analysis of "real-world" data is often confronted with the fact that most standard statistical methods were developed under some kind of idealization of the data that is often not adequate in practical situations. This concerns, among other things, i) the potentially deficient quality of the data, which can arise for example from measurement error, non-response in surveys or data-processing errors, and ii) the scale quality of the data, which is idealized as "the data have some clear scale of measurement that can be uniquely located within the scale hierarchy of Stevens (or that of Narens and Luce, or Orth)". Modern statistical methods such as correction techniques for measurement error or robust methods cope with issue i). In the context of missing or coarsened data, imputation techniques and methods that explicitly model the missing/coarsening process are nowadays well-established tools of refined data analysis. Concerning ii), the typical statistical viewpoint is a more pragmatic one: in case of doubt, one simply presumes the strongest scale of measurement that is clearly "justified". In more complex situations, for example in the analysis of ranking data, statisticians often do not worry too much about purely measurement-theoretic reservations, but instead embed the data structure in an appropriate, easy-to-handle space, such as a metric space, and then use all statistical tools available for this space. Against this background, the present cumulative dissertation tries to contribute from different perspectives to the appropriate handling of data that challenge the above-mentioned idealizations. The focus is, on the one hand, on the analysis of interval-valued and set-valued data within the methodology of partial identification, and, on the other hand, on the analysis of data with values in a partially ordered set (poset-valued data). Further tools of statistical modeling treated in the dissertation are necessity measures in the context of possibility theory and concepts of stochastic dominance for poset-valued data.
    The dissertation consists of 8 contributions, which are discussed in detail in the following sections: Contribution 1 analyzes different identification regions for partially identified linear models under interval-valued responses and develops a further kind of identification region (as well as a corresponding estimator). Estimates for the identification regions are compared to each other and also to classical statistical approaches for a data set on wine quality. Contribution 2 deals with logistic regression under coarsened responses, analyzes point-identifying assumptions and develops likelihood-based estimators for the identified set. The methods are illustrated with data from a wave of the panel study "Labor Market and Social Security" (PASS). Contribution 3 analyzes the combinatorial structure of the extreme points and the edges of a polytope (called credal set or core in the literature) that plays a crucial role in imprecise probability theory. Furthermore, an efficient algorithm for enumerating all extreme points is given and compared to existing standard methods. Contribution 4 develops a quantile concept for data or random variables with values in a complete lattice, which is applied in Contribution 5 to ranking data in the context of a data set on the wisdom-of-the-crowd phenomenon.
    In Contribution 6 a framework for evaluating the quality of different aggregation functions of Social Choice Theory is developed, which enables an analysis of quality as a function of group-specific homogeneity. In a simulation study, selected aggregation functions, including an aggregation function based on the concepts of Contributions 4 and 5, are analyzed. Contribution 7 supplies a linear program that allows for detecting stochastic dominance between poset-valued random variables, gives proposals for inference and regularization, and generalizes the approach to the general task of optimizing a linear function over a closure system. The generality of the developed methods is illustrated with data examples on multivariate inequality analysis, on item impact and differential item functioning in item response theory, on distributional differences in spatial statistics, and on guided regularization in cognitive diagnosis models. Contribution 8 uses concepts of stochastic dominance to establish a descriptive approach for a relational analysis of person ability and item difficulty in the context of multidimensional item response theory. All developed methods have been implemented in the language R ([R Development Core Team, 2014]) and are available from the author upon request. The application examples corroborate the usefulness of the weak types of statistical modeling examined in this thesis, which, beyond their flexibility in dealing with many kinds of data deficiency, can still lead to informative subject-matter conclusions that are all the more reliable because of the weak modeling.
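    As a concrete illustration of the stochastic dominance concept used in Contributions 7 and 8, the following sketch checks first-order dominance between two distributions on a small poset via the standard upper-set characterization (P(X in U) >= P(Y in U) for every upper set U). It is a brute-force toy version, not the linear-programming formulation developed in the dissertation.

```python
from itertools import combinations

def is_upper(s, elements, leq):
    """True if s is an upper set: whenever a is in s and a <= b, then b is in s."""
    return all(b in s for a in s for b in elements if leq(a, b))

def stochastically_dominates(p, q, leq):
    """First-order stochastic dominance of p over q on a finite poset.

    p, q: dicts mapping poset elements to probabilities (same keys).
    leq(a, b): True iff a <= b in the poset.
    """
    elements = list(p)
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            s = set(subset)
            if is_upper(s, elements, leq) and \
               sum(p[e] for e in s) < sum(q[e] for e in s) - 1e-12:
                return False
    return True

# Toy poset: the componentwise (product) order on {0,1} x {0,1}.
leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}
q = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.2}
print(stochastically_dominates(p, q, leq))  # True: p shifts mass to larger elements
```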

    Optimal Siting of Electric Vehicle Charging Stations Using Pythagorean Fuzzy VIKOR Approach

    Site selection for electric vehicle charging stations (EVCSs) is the process of determining the most suitable location among alternatives for the construction of charging facilities for electric vehicles. It can be regarded as a complex multicriteria decision-making (MCDM) problem requiring consideration of multiple conflicting criteria. In the real world, it is often hard or impossible for decision makers to estimate their preferences with exact numerical values. Therefore, Pythagorean fuzzy set theory has frequently been used to handle imprecise data and vague expressions in practical decision-making problems. In this paper, a Pythagorean fuzzy VIKOR (PF-VIKOR) approach is developed for solving EVCS site selection problems, in which the evaluations of alternatives are given as linguistic terms characterized by Pythagorean fuzzy values (PFVs). In particular, the generalized Pythagorean fuzzy ordered weighted standardized distance (GPFOWSD) operator is proposed to calculate the utility and regret measures for ranking alternative sites. Finally, a practical example in Shanghai, China, is included to demonstrate the proposed EVCS siting model, and its advantages are highlighted by comparing the results with those of other relevant methods.
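    A rough sketch of the VIKOR machinery referred to above, using the common Pythagorean fuzzy score function s(ÎŒ, Îœ) = ÎŒÂČ − ÎœÂČ to defuzzify ratings before a plain crisp VIKOR step. The paper's GPFOWSD operator and linguistic scales are not reproduced; the weights, ratings and trade-off parameter v below are illustrative assumptions.

```python
def pf_score(mu, nu):
    """Score of a Pythagorean fuzzy value (mu, nu) with mu^2 + nu^2 <= 1."""
    return mu ** 2 - nu ** 2

def vikor(ratings, weights, v=0.5):
    """ratings[i][j]: crisp score of alternative i on (benefit) criterion j.
    Returns the VIKOR Q values (smaller is better)."""
    n_crit = len(weights)
    best = [max(r[j] for r in ratings) for j in range(n_crit)]
    worst = [min(r[j] for r in ratings) for j in range(n_crit)]
    S, R = [], []
    for r in ratings:
        # Weighted normalized distances to the ideal value per criterion.
        terms = [weights[j] * (best[j] - r[j]) / ((best[j] - worst[j]) or 1.0)
                 for j in range(n_crit)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [v * (S[i] - s_star) / ((s_minus - s_star) or 1.0)
            + (1 - v) * (R[i] - r_star) / ((r_minus - r_star) or 1.0)
            for i in range(len(ratings))]

# Three candidate sites rated on two criteria as Pythagorean fuzzy values (mu, nu).
pf_ratings = [
    [(0.9, 0.3), (0.6, 0.5)],
    [(0.7, 0.4), (0.8, 0.2)],
    [(0.5, 0.6), (0.7, 0.4)],
]
crisp = [[pf_score(mu, nu) for mu, nu in row] for row in pf_ratings]
print(vikor(crisp, weights=[0.6, 0.4]))  # site with the smallest Q ranks first
```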

    Robust schedules for tardiness optimization in job shop with interval uncertainty

    This paper addresses a variant of the job shop scheduling problem with total tardiness minimization where task durations and due dates are uncertain. This uncertainty is modelled with intervals. Different ranking methods for intervals are considered and embedded into a genetic algorithm. A new robustness measure is proposed to compare the different ranking methods and assess their capacity to predict ‘expected delays’ of jobs. Experimental results show that dealing with uncertainty during the optimization process yields more robust solutions. A sensitivity analysis also shows that the robustness of the solutions given by the solving method increases when the uncertainty grows. This research has been supported by the Spanish Government under research grants PID2019-106263RB-I00 and TIN2017-87600-P.
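    To make the idea of ranking interval-valued objectives concrete, the sketch below compares the uncertain tardiness of two candidate schedules with two classical interval-ranking rules (midpoint and Hurwicz). These particular rules and the toy numbers are assumptions for illustration; the paper's ranking methods and robustness measure are not reproduced.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Interval addition: range of all possible sums of realizations.
        return Interval(self.lo + other.lo, self.hi + other.hi)

def midpoint(iv):
    return (iv.lo + iv.hi) / 2.0

def hurwicz(iv, alpha=0.5):
    # Convex combination of best and worst case; alpha is the optimism level.
    return alpha * iv.lo + (1 - alpha) * iv.hi

# Uncertain total tardiness of two candidate schedules, as sums of interval delays.
tardiness_a = Interval(2, 5) + Interval(0, 3)   # [2, 8]
tardiness_b = Interval(3, 4) + Interval(1, 3)   # [4, 7]

print("midpoint prefers:", "A" if midpoint(tardiness_a) < midpoint(tardiness_b) else "B")
print("hurwicz(0.2) prefers:", "A" if hurwicz(tardiness_a, 0.2) < hurwicz(tardiness_b, 0.2) else "B")
```

    Note that the two rules can disagree (as they do here), which is precisely why the choice of ranking method matters when intervals are embedded in the genetic algorithm.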
    • 
