
    Comparison of the h-index for different fields of research using bootstrap methodology

    An important disadvantage of the h-index is that it typically cannot take into account a researcher's specific field of research. Sample point estimates of the average and median h-index for the various fields are usually reported, but these are highly variable and dependent on the specific samples, so it would be useful to provide confidence intervals for these quantities. In this paper we apply the non-parametric bootstrap technique to construct confidence intervals for the h-index in different fields of research. In this way no specific assumptions about the distribution of the empirical h-index are required, nor are large samples needed, since the methodology is based on resampling from the initial sample. The results of the analysis show important differences between the various fields. The performance of the bootstrap intervals for the mean and median h-index is rather satisfactory for most fields, as revealed by the simulations performed
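    As a concrete illustration of the technique described above, the following is a minimal sketch of a percentile bootstrap confidence interval for the mean and median h-index in Python; the sample values, the number of resamples, and the percentile method are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample of h-index values for one research field
# (illustrative numbers only, not taken from the paper).
h_sample = np.array([4, 7, 9, 12, 15, 16, 18, 21, 25, 31, 40])

def bootstrap_ci(data, stat=np.median, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic of the sample."""
    boot_stats = np.empty(n_boot)
    for i in range(n_boot):
        # Resample with replacement from the initial sample.
        resample = rng.choice(data, size=data.size, replace=True)
        boot_stats[i] = stat(resample)
    lower, upper = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return lower, upper

print("95% CI for the median h-index:", bootstrap_ci(h_sample, np.median))
print("95% CI for the mean h-index:  ", bootstrap_ci(h_sample, np.mean))
```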

    REDI for Binned Data: A Random Empirical Distribution Imputation Method for Estimating Continuous Incomes

    Researchers often need to work with categorical income data. The typical nonparametric (including midpoint) and parametric estimation methods used to estimate summary statistics both have advantages, but they carry assumptions that cause them to deviate in important ways from real-world income distributions. The method introduced here, random empirical distribution imputation (REDI), imputes discrete observations from binned income data while also calculating summary statistics. REDI achieves this through random cold-deck imputation from a real-world reference data set (demonstrated here using the Current Population Survey Annual Social and Economic Supplement). The method can be used to reconcile bins between data sets or across years and to handle top incomes. REDI has the further advantages of being nonparametric, bin consistent, area and variance preserving, continuous, and computationally fast. The author provides a proof of concept using two years of the American Community Survey. The method is available as the redi command for Stata
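    Since the abstract describes REDI as random cold-deck imputation from a reference data set, the sketch below shows that core step in Python: for each binned response, a continuous income is drawn at random from reference observations falling in the same bin. The bins, the synthetic reference data, and the function name redi_impute are hypothetical stand-ins, not the redi command's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical income bins (lower, upper) in dollars; None marks the open-ended top bin.
bins = [(0, 25_000), (25_000, 50_000), (50_000, 100_000), (100_000, None)]

# Synthetic reference incomes standing in for a CPS ASEC extract.
reference = rng.lognormal(mean=10.8, sigma=0.7, size=50_000)

def redi_impute(bin_index, bins, reference):
    """Draw a continuous income at random from reference observations in the same bin."""
    lo, hi = bins[bin_index]
    if hi is None:                       # open-ended top bin
        donors = reference[reference >= lo]
    else:
        donors = reference[(reference >= lo) & (reference < hi)]
    return rng.choice(donors)

# Impute continuous incomes for a few hypothetical binned survey responses.
binned_responses = [0, 2, 1, 3, 2]
imputed = [redi_impute(b, bins, reference) for b in binned_responses]
print(np.round(imputed, 2))
```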

    Rating Curve Development And Multivariate Statistical Analyses Of Stream Water Quality In Greensboro, North Carolina

    A suite of regression models was tested for the construction of rating curves and constituent load estimation for 17 water quality parameters monitored regularly since 1999 at 16 stations by the City of Greensboro in North Carolina. The best models were selected based on statistical evaluation within the framework of the LOAD ESTimator (LOADEST) model. The constituent load predictions deviated from the "true load" by -6% to 16% for Nitrate; -14% to +12% for Nitrite; -6% to 0% for Total Dissolved Solids (TDS); -2% to 9% for Total Kjeldahl Nitrogen (TKN); -22% to 9% for Total Phosphorus (TP); and -51% to 23% for Total Suspended Solids (TSS). There was a systematic bias towards under-prediction for TDS, TP, and TSS, whereas Nitrate and TKN were over-predicted and Nitrite showed no bias. The predicted loads were compared with five interpolation methods (M1, M2, M3, M4 and M5), with the following pattern: for Nitrate, TDS and TSS, loads estimated by M3, M4 and M5 > LOADEST > M1 and M2; for Nitrite, TKN and TP, LOADEST > M3, M4 and M5 > M1 and M2. Multivariate analyses applied cluster analysis (CA), factor analysis (FA) and principal component analysis (PCA) to all parameters at all stations. CA grouped the water quality stations into four spatially similar clusters. PCA/FA was applied to the data set for the entire watershed and to the spatially similar station groups. The combination of FA/PCA and CA reduced the size of the data set by 71% while representing 64% of the total variance
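    Rating curves of the kind fitted by LOADEST are, in their simplest form, log-linear regressions of constituent load on streamflow. The sketch below fits such a model by ordinary least squares in Python and applies a smearing-type bias correction when back-transforming from log space; the synthetic flow and load data and the single-predictor model form are illustrative assumptions, not the study's calibrated LOADEST models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired observations: streamflow Q (cfs) and measured constituent
# load L (kg/day); illustrative data only, not the Greensboro monitoring records.
Q = rng.lognormal(mean=3.0, sigma=0.8, size=120)
L = np.exp(0.5 + 1.2 * np.log(Q) + rng.normal(0, 0.3, size=Q.size))

# Fit the simplest LOADEST-style rating curve: ln(L) = a0 + a1 * ln(Q).
a1, a0 = np.polyfit(np.log(Q), np.log(L), deg=1)

# Back-transform with a smearing-type bias correction factor.
residuals = np.log(L) - (a0 + a1 * np.log(Q))
smearing = np.mean(np.exp(residuals))
L_pred = np.exp(a0 + a1 * np.log(Q)) * smearing

print(f"rating curve: ln(L) = {a0:.2f} + {a1:.2f} ln(Q)")
print(f"relative error of total load: {(L_pred.sum() / L.sum() - 1) * 100:.1f}%")
```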

    Vol. 15, No. 1 (Full Issue)


    Scalarized Preferences in Multi-objective Optimization

    Multi-objective optimization problems have no solution that is optimal in every objective function. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision maker who implements the compromise. Scalarization, the mapping of the vector of objective values onto a real number, identifies a single solution as the global preference optimum in order to solve these problems. However, scalarization methods generate no additional information about other compromise solutions that could change the decision maker's preferences regarding the global optimum. To address this problem, this dissertation provides a theoretical and algorithmic analysis of scalarized preferences. The theoretical analysis consists of the development of an ordering framework that characterizes preferences as problem transformations which define preferred subsets of the Pareto front. Scalarization is represented in this framework as a transformation of the objective set. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy these axioms. The algorithmic analysis characterizes preferences by the result that an optimization algorithm generates. Two new paradigms are identified within this analysis, and for both of them algorithms are designed that use scalarized preference information: preference-biased Pareto front approximations distribute points over the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that represent local scalarization optima in objective space. A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems are presented to illustrate the usefulness of the two algorithms. The first problem consists of finding schedules for a combined heat and power plant that maximize the generated electricity and heat while minimizing fuel consumption. Preference-biased approximations generate more energy-efficient solutions, among which the decision maker can select a favored solution by weighing the trade-offs between the three objectives. The second problem deals with scheduling appliances in a residential building so that energy costs, carbon dioxide emissions and thermal discomfort are minimized. It is shown that local scalarization optima represent schedules that offer a good balance between the three objectives. The analysis and experiments presented in this work enable decision makers to make better decisions by applying methods that generate more options consistent with their preferences
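    To make the notion of scalarization concrete, the following minimal sketch applies a weighted-sum scalarization to a simple bi-objective test problem in Python and locates the preferred Pareto point by a grid search; the test problem, weights and search procedure are illustrative assumptions, not the dissertation's ordering framework or case studies.

```python
import numpy as np

# Hypothetical bi-objective problem: minimize f1 and f2 over x in [0, 1]
# (a standard convex test problem, not one of the dissertation's applications).
def objectives(x):
    f1 = x ** 2
    f2 = (x - 1.0) ** 2
    return np.array([f1, f2])

def weighted_sum(x, weights):
    """Scalarization: map the objective vector onto a single real number."""
    return float(np.dot(weights, objectives(x)))

# The decision maker's preference expressed as weights; the minimizer of the
# scalarized function is a single point on the Pareto front.
weights = np.array([0.7, 0.3])
grid = np.linspace(0.0, 1.0, 1001)
best_x = min(grid, key=lambda x: weighted_sum(x, weights))

print("preferred solution x* =", round(best_x, 3),
      "objectives =", np.round(objectives(best_x), 3))
```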

    Design and optimization under uncertainty of Energy Systems

    In many engineering design and optimisation problems, the presence of uncertainty in data and parameters is a central and critical issue. The analysis and design of advanced complex energy systems is generally performed starting from a single operating condition and assuming a series of design and operating parameters to be fixed values. However, many of the variables on which the design is based are subject to uncertainty because they cannot be determined with adequate precision, and they can affect both performance and cost. Uncertainties stem naturally from our limitations in measurement, prediction and manufacturing, and any system used in engineering is subject to some degree of uncertainty. Different fields of engineering describe this uncertainty in different ways and adopt a variety of techniques to approach the problem. The past decade has seen significant growth in research and development of uncertainty quantification methods for analysing the propagation of uncertain inputs through a system. The main challenges in this field are identifying the sources of uncertainty that potentially affect the outcomes and propagating these uncertainties efficiently from the sources to the quantities of interest, especially when there are many sources of uncertainty. Hence, the level of rigour of an uncertainty analysis depends on the quality of the uncertainty quantification method. The main obstacle in this analysis is often the computational effort, because the representative model is typically highly non-linear and complex. It is therefore necessary to have a robust tool that can perform the uncertainty propagation through a non-intrusive approach with as few model evaluations as possible. The primary goal of this work is to present a robust method for uncertainty quantification applied to energy systems. The first step in this direction was an analysis of the uncertainties on a recuperator for micro gas turbines, carried out with the Monte Carlo and Response Sensitivity Analysis methodologies. However, when more complex energy systems are considered, one of the main weaknesses of uncertainty quantification methods arises: the extremely high computational effort required. For this reason, the application of a so-called metamodel was found to be necessary and useful. This approach was applied to perform a complete analysis under uncertainty of a solid oxide fuel cell hybrid system, starting from the evaluation of the impact of several uncertainties on the system up to a robust design including a multi-objective optimization. The response surfaces allowed the uncertainties in the system to be taken into account while performing an acceptable number of simulations. These response surfaces were then used in a Monte Carlo simulation to evaluate the impact of the uncertainties on the monitored outputs, giving insight into the spread of the resulting probability density functions and thus into the outputs that should be considered more carefully during the design phase. Finally, the analysis of a complex combined cycle with a flue gas condensing heat pump subject to market uncertainties was performed. To account for the uncertainties in the electricity price, which directly affects the revenues of the system, a statistical study of the behaviour of that price over the years was performed. From the data obtained, a probability density function was constructed for each hour of the day to represent its behaviour, and those distributions were then used to analyse the variability of the system in terms of revenues and emissions
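    The metamodel-plus-Monte-Carlo workflow described above can be sketched in a few lines: fit a cheap response surface to a small number of runs of an expensive model, then propagate the input uncertainties through the surrogate by Monte Carlo sampling. In the Python sketch below, the two-input model, the quadratic response surface and the assumed input distributions are illustrative placeholders, not the fuel cell hybrid system model or the distributions used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical expensive system model with two uncertain inputs
# (an illustrative stand-in for a detailed energy system simulation).
def expensive_model(x1, x2):
    return 50.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x1 * x2

# Step 1: small design of experiments to train a quadratic response surface.
X = rng.uniform(-1, 1, size=(30, 2))
y = expensive_model(X[:, 0], X[:, 1])

def features(X):
    """Quadratic polynomial basis for the response surface."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Step 2: cheap Monte Carlo on the surrogate with assumed input uncertainties.
samples = rng.normal(loc=0.0, scale=0.3, size=(100_000, 2))
y_mc = features(samples) @ coef

print(f"output mean  = {y_mc.mean():.2f}")
print(f"output stdev = {y_mc.std():.2f}")
```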

    Evolutionary Computation 2020

    Intelligent optimization uses the mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also produced breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms
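    As one example of the swarm-based algorithms mentioned above, the sketch below implements a minimal particle swarm optimization loop in Python on the sphere benchmark function; the benchmark, swarm size and coefficient values are illustrative choices and are not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(3)

# Objective: the sphere function, a standard benchmark.
def sphere(x):
    return np.sum(x ** 2, axis=-1)

# Minimal particle swarm optimization sketch.
n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                         # personal best positions
pbest_val = sphere(pbest)
gbest = pbest[np.argmin(pbest_val)].copy() # global best position

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", round(float(np.min(pbest_val)), 6))
```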