2,273 research outputs found

    Unsupervised Model Selection for Time-series Anomaly Detection

    Full text link
    Anomaly detection in time-series has a wide range of practical applications. While numerous anomaly detection methods have been proposed in the literature, a recent survey concluded that no single method is the most accurate across various datasets. To make matters worse, anomaly labels are scarce and rarely available in practice. The practical problem of selecting the most accurate model for a given dataset without labels has received little attention in the literature. This paper answers that question: given an unlabeled dataset and a set of candidate anomaly detectors, how can we select the most accurate model? To this end, we identify three classes of surrogate (unsupervised) metrics, namely prediction error, model centrality, and performance on injected synthetic anomalies, and show that some metrics are highly correlated with standard supervised anomaly detection performance metrics such as the F1 score, but to varying degrees. We formulate metric combination with multiple imperfect surrogate metrics as a robust rank aggregation problem. We then provide theoretical justification behind the proposed approach. Large-scale experiments on multiple real-world datasets demonstrate that our proposed unsupervised approach is as effective as selecting the most accurate model based on partially labeled data. Comment: Accepted at International Conference on Learning Representations (ICLR) 2023 with a notable-top-25% recommendation. Reviewer, AC, and author discussion available at https://openreview.net/forum?id=gOZ_pKANaP
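    The recipe in this abstract (score each candidate detector with several unsupervised surrogate metrics, then combine the resulting rankings robustly) can be sketched compactly. The snippet below is a minimal illustration rather than the authors' implementation: the surrogate scores are made up, and a trimmed rank average stands in for the robust rank aggregation formulated in the paper.

```python
import numpy as np

def aggregate_ranks(surrogate_scores, trim=0):
    """Combine rankings from several surrogate metrics into one model ranking.

    surrogate_scores: array of shape (n_metrics, n_models); higher = better.
    trim: number of extreme per-model ranks to drop before averaging
          (a crude robustness device, standing in for the paper's
          robust rank aggregation).
    """
    scores = np.asarray(surrogate_scores, dtype=float)
    # Rank the models within each metric (0 = best).
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    # Sort each model's ranks across metrics and optionally trim the extremes.
    sorted_ranks = np.sort(ranks, axis=0)
    if trim > 0:
        sorted_ranks = sorted_ranks[trim:sorted_ranks.shape[0] - trim]
    mean_rank = sorted_ranks.mean(axis=0)
    return np.argsort(mean_rank)  # model indices, best first

# Hypothetical surrogate scores for 4 candidate detectors under 3 metrics
# (prediction error turned into a score, model centrality,
#  performance on injected synthetic anomalies).
scores = [[0.7, 0.4, 0.9, 0.5],
          [0.6, 0.5, 0.8, 0.4],
          [0.2, 0.9, 0.7, 0.3]]
print(aggregate_ranks(scores))  # -> [2 0 1 3]
```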

    Toward the development of a model to estimate the readability of credentialing-examination materials

    Full text link
    The purpose of this study was to develop a set of procedures to establish readability, including an equation, that accommodates the multiple-choice item format and the occupation-specific language related to credentialing examinations. The procedures and equation should be appropriate for learning materials, examination materials, and occupational materials. To this end, the variance in readability estimates accounted for by combinations of semantic and syntactic variables was explored, a method was devised to accommodate occupation-specific vocabulary, and new-model readability formulas were created and calibrated. Existing readability formulas were then recalibrated with the same materials used to calibrate the new-model formulas. The new-model and recalibrated formulas were then applied to sample items extracted from a professional licensing examination and the results were compared
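    As a rough illustration of what a regression-based readability equation of this kind looks like, the sketch below fits grade-level estimates from one syntactic and one semantic predictor on made-up calibration passages. The variable names, data, and the treatment of occupation-specific vocabulary are hypothetical; this mirrors the general form of such formulas, not the study's actual equation.

```python
import numpy as np

# Hypothetical calibration data: one row per passage.
# Syntactic predictor: mean words per sentence.
# Semantic predictor: % of words not on a familiar-word list, after
#   treating occupation-specific vocabulary as familiar.
mean_sentence_len = np.array([12.0, 18.5, 22.0, 15.0, 27.5])
pct_unfamiliar    = np.array([ 8.0, 14.0, 21.0, 10.0, 30.0])
criterion_grade   = np.array([ 6.0,  9.0, 12.0,  7.5, 15.0])  # judged grade level

# Fit grade = b0 + b1*sentence_len + b2*pct_unfamiliar by least squares.
X = np.column_stack([np.ones_like(mean_sentence_len),
                     mean_sentence_len, pct_unfamiliar])
coef, *_ = np.linalg.lstsq(X, criterion_grade, rcond=None)
b0, b1, b2 = coef
print(f"grade ~ {b0:.2f} + {b1:.2f}*sentence_len + {b2:.2f}*pct_unfamiliar")

# Apply the fitted formula to a new multiple-choice item (stem and options pooled).
new_item = np.array([1.0, 16.0, 12.0])
print("estimated readability (grade level):", float(new_item @ coef))
```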

    Unsupervised learning for anomaly detection in Australian medical payment data

    Full text link
    Fraudulent or wasteful medical insurance claims made by health care providers are costly for insurers. Typically, OECD healthcare organisations lose 3-8% of total expenditure due to fraud. As Australia’s universal public health insurer, Medicare Australia, spends approximately A$34 billion per annum on the Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme, wasted spending of A$1–2.7 billion could be expected. However, fewer than 1% of claims to Medicare Australia are detected as fraudulent, below international benchmarks. Variation is common in medicine, and health conditions, along with their presentation and treatment, are heterogeneous by nature. Increasing volumes of data and rapidly changing patterns bring challenges which require novel solutions. Machine learning and data mining are becoming commonplace in this field, but no gold standard is yet available. In this project, requirements are developed for real-world application to compliance analytics at the Australian Government Department of Health and Aged Care (DoH), covering: unsupervised learning; problem generalisation; human interpretability; context discovery; and cost prediction. Three novel methods are presented which rank providers by potentially recoverable costs. These methods use association analysis, topic modelling, and sequential pattern mining to provide interpretable, expert-editable models of typical provider claims. Anomalous providers are identified through comparison to the typical models, using metrics based on the costs of excess or upgraded services. Domain knowledge is incorporated in a machine-friendly way in two of the methods through the use of the MBS as an ontology. Validation by subject-matter experts and comparison to existing techniques shows that the methods perform well. The methods are implemented in a software framework which enables rapid prototyping and quality assurance. The code is implemented at the DoH, and further applications as decision-support systems are in progress. The developed requirements will apply to future work in this field
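    A toy version of the ranking idea described above (compare each provider's claiming to a model of typical claiming and price the excess) might look as follows. The claim records, item numbers, and fees are invented, and a simple peer-average baseline stands in for the thesis's association-analysis, topic-model, and sequential-pattern models.

```python
from collections import defaultdict

# Hypothetical claim records: (provider, MBS item, count), plus a fee per item.
claims = [("A", "23", 900), ("A", "36", 400),
          ("B", "23", 950), ("B", "36",  60),
          ("C", "23", 930), ("C", "36",  70)]
fee = {"23": 41.40, "36": 80.10}  # illustrative schedule fees

# Build per-provider counts and a "typical" (peer-average) count per item.
totals, per_provider = defaultdict(float), defaultdict(dict)
for provider, item, n in claims:
    totals[item] += n
    per_provider[provider][item] = n
providers = sorted(per_provider)
typical = {item: total / len(providers) for item, total in totals.items()}

# Potentially recoverable cost: fee-weighted counts above the typical model.
def excess_cost(provider):
    return sum(max(per_provider[provider].get(item, 0) - typical[item], 0) * fee[item]
               for item in typical)

# Rank providers, highest potentially recoverable cost first.
for provider in sorted(providers, key=excess_cost, reverse=True):
    print(provider, round(excess_cost(provider), 2))
```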

    Is economic value added (EVA) the best way to assemble a portfolio?

    Get PDF
    In search of a better investment metric, researchers began to study Economic Value Added, or EVA, which was introduced in 1991 by Stern Stewart & Co in their book, The Quest for Value (Turvey, 2000). Stern Stewart & Co devised EVA as a better alternative for evaluating investment projects within the corporate finance field, and it was later considered for use as a performance metric by investors. A wide array of multinational corporations, such as Coca-Cola, Briggs and Stratton, and AT&T, adopted the EVA method, which led to EVA's worldwide acclaim. Several points in the study reveal that EVA does not offer less risk, higher returns, or more adaptability for an investor. In fact, EVA underperformed the traditional portfolio performance metrics in key measurements, including mean returns and confidence intervals. EVA is a difficult performance metric to calculate, with several complex components, such as NOPAT, cost of equity, and cost of debt, that can each be calculated in several different ways. Any information that is inaccurate or lacking can significantly impact the outcomes. Traditional performance metrics, on the other hand, such as ROA, ROE, and E/P, are simple to calculate, with few components and only one way to calculate them
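    For reference, the headline formula itself is compact: EVA = NOPAT - WACC x invested capital; the difficulty the study points to lies in the many conventions for computing the inputs. A worked example on illustrative figures:

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital, with a tax shield on debt."""
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt * (1 - tax_rate)

def eva(nopat, invested_capital, wacc_rate):
    """Economic Value Added: operating profit less a charge for the capital used."""
    return nopat - wacc_rate * invested_capital

# Illustrative figures only (in $ millions); not data from the study.
rate = wacc(equity=600, debt=400, cost_of_equity=0.10, cost_of_debt=0.06, tax_rate=0.30)
print(round(rate, 4))                                                    # 0.0768
print(round(eva(nopat=95, invested_capital=1000, wacc_rate=rate), 1))    # 18.2
```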

    VIPSCAL: A combined vector ideal point model for preference data

    Get PDF
    In this paper, we propose a new model that combines the vector model and the ideal point model of unfolding. An algorithm is developed, called VIPSCAL, that minimizes the combined loss for both ordinal and interval transformations. As such, mixed representations including both vectors and ideal points can be obtained, but the algorithm also allows for the unmixed cases, giving either a complete ideal point analysis or a complete vector analysis. On the basis of previous research, the mixed representations were expected to be nondegenerate. However, degenerate solutions still occurred, as the common belief that distant ideal points can be represented by vectors does not hold true. The occurrence of these distant ideal points was resolved by adding certain length and orthogonality restrictions on the configuration. The restrictions can be used for both the mixed and unmixed cases in several ways, such that a number of different models can be fitted by VIPSCAL
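    The two building blocks that VIPSCAL combines have simple geometric forms: in a vector model, a subject's predicted preference for an object is the projection of the object point onto the subject's vector, while in an ideal point model, preference decreases with the distance between the object and the subject's ideal point. Below is a minimal sketch of these two kinds of predictions on an invented configuration; it is not the VIPSCAL algorithm or its combined loss.

```python
import numpy as np

objects = np.array([[1.0, 0.2],   # 2-D configuration of three objects
                    [0.1, 1.1],
                    [0.8, 0.9]])

subject_vector = np.array([0.9, 0.3])   # vector-model representation of one subject
subject_ideal  = np.array([0.6, 0.9])   # ideal-point representation of the same subject

# Vector model: preference ~ projection of each object onto the subject's vector.
vector_pred = objects @ subject_vector

# Ideal point model: preference ~ negative squared distance to the ideal point.
ideal_pred = -np.sum((objects - subject_ideal) ** 2, axis=1)

# Both induce a predicted preference order over the objects (best first),
# and the two representations need not agree.
print(np.argsort(-vector_pred))  # [2 0 1]
print(np.argsort(-ideal_pred))   # [2 1 0]
```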

    Reliable statistical modeling of weakly structured information

    Get PDF
    The statistical analysis of "real-world" data is often confronted with the fact that most standard statistical methods were developed under some kind of idealization of the data that is often not adequate in practical situations. This concerns, among others, i) the potentially deficient quality of the data, which can arise for example due to measurement error, non-response in surveys, or data processing errors, and ii) the scale quality of the data, which is idealized as "the data have some clear scale of measurement that can be uniquely located within the scale hierarchy of Stevens (or that of Narens and Luce, or Orth)". Modern statistical methods, like, e.g., correction techniques for measurement error or robust methods, cope with issue i). In the context of missing or coarsened data, imputation techniques and methods that explicitly model the missing/coarsening process are nowadays well-established tools of refined data analysis. Concerning ii), the typical statistical viewpoint is a more pragmatic one: in case of doubt, one simply presumes the strongest scale of measurement that is clearly "justified". In more complex situations, like for example in the context of the analysis of ranking data, statisticians often do not worry too much about purely measurement-theoretic reservations, but instead embed the data structure in an appropriate, easy-to-handle space, like e.g. a metric space, and then use all statistical tools available for this space. Against this background, the present cumulative dissertation tries to contribute from different perspectives to the appropriate handling of data that challenge the above-mentioned idealizations. The focus here is, on the one hand, on the analysis of interval-valued and set-valued data within the methodology of partial identification, and, on the other hand, on the analysis of data with values in a partially ordered set (poset-valued data). Further tools of statistical modeling treated in the dissertation are necessity measures in the context of possibility theory and concepts of stochastic dominance for poset-valued data.
    The present dissertation consists of 8 contributions, which will be discussed in detail in the following sections: Contribution 1 analyzes different identification regions for partially identified linear models under interval-valued responses and develops a further kind of identification region (as well as a corresponding estimator). Estimates for the identification regions are compared to each other and also to classical statistical approaches for a data set on wine quality. Contribution 2 deals with logistic regression under coarsened responses, analyzes point-identifying assumptions, and develops likelihood-based estimators for the identified set. The methods are illustrated with data from a wave of the panel study "Labor Market and Social Security" (PASS). Contribution 3 analyzes the combinatorial structure of the extreme points and the edges of a polytope (called credal set or core in the literature) that plays a crucial role in imprecise probability theory. Furthermore, an efficient algorithm for enumerating all extreme points is given and compared to existing standard methods. Contribution 4 develops a quantile concept for data or random variables with values in a complete lattice, which is applied in Contribution 5 to the case of ranking data in the context of a data set on the wisdom-of-the-crowd phenomenon.
    In Contribution 6, a framework for evaluating the quality of different aggregation functions of social choice theory is developed, which enables an analysis of quality as a function of group-specific homogeneity. In a simulation study, selected aggregation functions, including an aggregation function based on the concepts of Contribution 4 and Contribution 5, are analyzed. Contribution 7 supplies a linear program that allows for detecting stochastic dominance for poset-valued random variables, gives proposals for inference and regularization, and generalizes the approach to the general task of optimizing a linear function on a closure system. The generality of the developed methods is illustrated with data examples in the context of multivariate inequality analysis, item impact and differential item functioning in the context of item response theory, the analysis of distributional differences in spatial statistics, and guided regularization in the context of cognitive diagnosis models. Contribution 8 uses concepts of stochastic dominance to establish a descriptive approach for a relational analysis of person ability and item difficulty in the context of multidimensional item response theory. All developed methods have been implemented in the language R ([R Development Core Team, 2014]) and are available from the author upon request. The application examples corroborate the usefulness of the weak types of statistical modeling examined in this thesis, which, beyond their flexibility to deal with many kinds of data deficiency, can still lead to informative substantive conclusions that are then more reliable due to the weak modeling.
    The statistical analysis of real-world data is often confronted with the fact that common standard statistical methods were developed under a strong idealization of the data situation which is, however, often not appropriate in practice. This concerns i) the potentially deficient quality of the data, caused for example by the presence of measurement errors, by systematic non-response in the context of social-science surveys, or by errors during data processing, and ii) the scale quality of the data itself: many data situations cannot be placed within the simple scale hierarchies of Stevens (or those of Narens and Luce, or Orth). More modern statistical methods, such as measurement-error correction techniques or robust methods, attempt to account for the idealization of data quality after the fact. In the context of missing or interval-censored data, imputation techniques for completing missing values, and methods that explicitly model the process generating the coarsened data, have become established. With respect to scale quality, statistics usually proceeds rather pragmatically; in case of doubt, the lowest scale level that is clearly justified is chosen. In more complex multivariate situations, such as the analysis of ranking data, which can hardly be forced into Stevens' "corset" any longer, one often resorts to the simple idea of embedding the data in a suitable metric space in order to then be able to use all the tools of metric modeling. Against this background, the cumulative dissertation presented here therefore aims to contribute, from different perspectives, to the appropriate handling of data that challenge these idealizations.
    The focus here is primarily on the analysis of interval-valued and set-valued data by means of partial identification on the side of deficient data quality, while with respect to scale quality the case of lattice-valued data is treated. As further tools of statistical modeling, necessity measures within the framework of imprecise probabilities and concepts of stochastic dominance for random variables with values in a partially ordered set are considered in particular. The present dissertation comprises 8 contributions, which are discussed in more detail in the following chapters: Contribution 1 analyzes different identification regions for partially identified linear models with an interval-valued observed response variable and proposes a new identification region (including an estimator). For a data set describing the quality of different red wines, given by expert judgments, as a function of various physicochemical properties, estimates of the identification regions are analyzed. The results are also compared with those of classical methods for interval data. Contribution 2 deals with logistic regression under a coarsened response variable, analyzes point-identifying assumptions, and develops likelihood-based estimators for the corresponding identification regions. The method is illustrated with data from a wave of the panel study "Arbeitsmarkt und Soziale Sicherung" (PASS). Contribution 3 analyzes the combinatorial structure of the extreme points and the edges of a polytope (the so-called structure or core of an interval probability or of a non-additive set function) that is of central importance in many areas of imprecise probability theory. An efficient algorithm for enumerating all extreme points is also given and compared with existing standard enumeration methods. Contribution 4 presents a quantile concept for lattice-valued data and random variables. In Contribution 5, this quantile concept is applied to ranking data in connection with a data set investigating the "wisdom of the crowd" phenomenon. Contribution 6 develops a method for the probabilistic analysis of the "quality" of different aggregation functions from social choice theory. The analysis is carried out as a function of the homogeneity of the groups considered. In a simulation-based study, various classical aggregation functions, as well as a new aggregation function based on Contributions 4 and 5, are compared. Contribution 7 presents an approach for checking whether stochastic dominance holds between two random variables. The approach uses linear programming techniques. Furthermore, proposals for statistical inference and regularization are made. The method is then also extended to the more general case of optimizing a linear function on a closure system. Its flexible applicability is illustrated by various application examples. Contribution 8 uses ideas of stochastic dominance to analyze data sets from multidimensional item response theory relationally, by developing pairs of mutually empirically supporting ability relations of the persons and difficulty relations of the items. All developed methods have been implemented in R ([R Development Core Team, 2014]).
    The application examples demonstrate the flexibility of the methods of relational or "weak" modeling considered here, in particular for handling deficient data, and underline the fact that even with methods of weak modeling, non-trivial substantive conclusions are often still possible, which, owing to the more cautious modeling, are then also considerably more reliable
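    One of the simplest instances of the partial-identification viewpoint that runs through this dissertation is bounding a mean from interval-valued observations: without further assumptions, the identification region for the mean is the interval between the mean of the lower endpoints and the mean of the upper endpoints. The sketch below shows this textbook case on invented data; it is not one of the dissertation's estimators for regression identification regions.

```python
import numpy as np

# Interval-valued observations [lower, upper], e.g. coarsened responses.
intervals = np.array([[4.0, 6.0],
                      [5.0, 5.0],   # a precisely observed value
                      [3.0, 7.0],
                      [6.0, 8.0]])

# Worst-case (no-assumption) identification region for the population mean:
# every selection of one value per interval is admissible, so the sample mean
# can be anything between the mean of the lower and the mean of the upper endpoints.
lower_bound = intervals[:, 0].mean()
upper_bound = intervals[:, 1].mean()
print(f"identification region for the mean: [{lower_bound}, {upper_bound}]")  # [4.5, 6.5]
```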