    Optimizing Parameters of Information-Theoretic Correlation Measurement for Multi-Channel Time-Series Datasets in Gravitational Wave Detectors

    Data analysis in modern science using extensive experimental and observational facilities, such as gravitational wave detectors, is essential in the search for novel scientific discoveries. Accordingly, various techniques and mathematical principles have been designed and developed to date. A recently proposed approximate correlation method based on information theory has been widely adopted in science and engineering. Although the maximal information coefficient (MIC) method is still in the phase of algorithmic improvement, it is particularly beneficial for identifying correlations among multiple noise sources in gravitational-wave detectors, including non-linear effects. This study investigates various prospects for determining MIC parameters to improve the reliability of handling multi-channel time-series data and to reduce high computing costs, and proposes a novel method for determining optimized parameter sets for identifying noise correlations in gravitational wave data. (Comment: 11 pages, 8 figures)
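    As a rough illustration of what tuning these parameters involves, the sketch below computes MIC between two synthetic, nonlinearly coupled channels for a few parameter settings, assuming the minepy implementation of MIC. The parameter names alpha (grid-size exponent) and c (clumping factor) are minepy's; the synthetic data and the small parameter grid are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the paper's method): vary MIC parameters while screening a
# channel pair, using the minepy implementation of the maximal information coefficient.
import numpy as np
from minepy import MINE

rng = np.random.default_rng(0)
strain = rng.normal(size=4096)                    # stand-in for the main channel
aux = strain**2 + 0.5 * rng.normal(size=4096)     # nonlinearly coupled auxiliary channel

for alpha in (0.4, 0.55, 0.6):                    # smaller alpha -> coarser grid -> cheaper
    for c in (5, 15):
        m = MINE(alpha=alpha, c=c)
        m.compute_score(strain, aux)
        print(f"alpha={alpha:.2f} c={c:2d}  MIC={m.mic():.3f}")
```

    Sweeping a small grid like this exposes the reliability/cost trade-off the study addresses: coarser grids are cheaper to compute but can miss weaker or more complex couplings.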

    An economic analysis of the councils of the United Nations

    This thesis consists of three pieces of research focussed on the Councils of the United Nations, predominantly the United Nations Security Council (UNSC). We consider three broad questions: which countries typically get on to the UNSC in its current form; which countries ought to get on to the UNSC; and how well proposed changes to the UNSC might steer it towards such ideals. In order to address the latter two questions it is sensible to begin by investigating how the current system works and whether there are any particular characteristics which influence the chances of a country being elected to the UNSC. In Chapter 2 we develop a model to test the significance of a country's characteristics on its probability of election to the UNSC. Chapter 3 then starts by developing a set of theoretical tests which can be applied to council voting systems, such as the selection of UNSC members from the UN General Assembly. The tests score a voting system based on how well the distribution of power in the council matches the power one would expect under a system where country representatives cast their votes in the council based on the outcomes of country- or regional-level referendums. We then apply these tests, using the implied probabilities of election that follow from the results of Chapter 2, to the UNSC election process. We finish by applying the tests of Chapter 3, which consider how equitable a proposal is, together with a further test of procedural efficiency, to each of the proposed reforms to the UNSC election process.
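    The abstract does not specify the form of the Chapter 2 model, so the following is only a hedged sketch of the general kind of discrete-choice specification such an analysis might use; the covariate names (gdp_share, un_contribution, years_since_last_term, region) and the data file are hypothetical placeholders, not details from the thesis.

```python
# Illustrative sketch only: a logit model relating country characteristics to the
# probability of election to the UNSC. All column names and the data file are
# hypothetical; the thesis's actual specification is not reproduced here.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("unsc_elections.csv")  # hypothetical country-year panel

X = pd.get_dummies(
    df[["gdp_share", "un_contribution", "years_since_last_term", "region"]],
    columns=["region"],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)
y = df["elected"]  # 1 if the country won a UNSC seat in that election year

model = sm.Logit(y, X).fit()
print(model.summary())  # coefficient signs and significance indicate which characteristics matter
```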

    Quantificação na pesquisa etnobotânica: um panorama sobre os índices usados de 1995 a 2009 (Quantification in ethnobotanical research: an overview of the indices used from 1995 to 2009)

    Over the last few decades, local knowledge has begun to be studied by ethnobotanists using quantitative analyses to assess the relationship between biological and cultural diversity, and the relative importance of natural resources for local populations. A considerable number of published articles have proposed these quantitative analyses, necessitating discussion and analysis of the commonly employed quantitative techniques. This study examines two central issues: the nature of quantitative research in ethnobotany and the use of quantitative indices in ethnobotanical research. A literature review was completed consisting of books, reviews, articles and editorials in the main international periodicals in the areas of ethnobiology and ethnoecology. Scientific search sites were consulted, and a database was compiled and analyzed. The analysis of 64 papers and four books constituted the basis for this work. The United States produced the greatest number of publications in journals in this field (65%). A total of 87 different quantitative techniques was recorded. This work does not claim to provide a census of all the publications on the subject, but rather intends to present a panorama of the current state of quantification in ethnobotany and to open space for reflection on what has been done in terms of quantification in the field.

    Workload Equity in Vehicle Routing Problems: A Survey and Analysis

    Over the past two decades, equity aspects have been considered in a growing number of models and methods for vehicle routing problems (VRPs). Equity concerns most often relate to fairly allocating workloads and to balancing the utilization of resources, and many practical applications have been reported in the literature. However, there has been only limited discussion about how workload equity should be modeled in VRPs, and various measures for optimizing such objectives have been proposed and implemented without a critical evaluation of their respective merits and consequences. This article addresses this gap with an analysis of classical and alternative equity functions for biobjective VRP models. In our survey, we review and categorize the existing literature on equitable VRPs. In the analysis, we identify a set of axiomatic properties that an ideal equity measure should satisfy, collect six common measures, and point out important connections between their properties and those of the resulting Pareto-optimal solutions. To gauge the extent of these implications, we also conduct a numerical study on small biobjective VRP instances solvable to optimality. Our study reveals two undesirable consequences when optimizing equity with nonmonotonic functions: Pareto-optimal solutions can consist of non-TSP-optimal tours, and even if all tours are TSP optimal, Pareto-optimal solutions can be workload inconsistent, i.e., composed of tours whose workloads are all equal to or longer than those of other Pareto-optimal solutions. We show that the extent of these phenomena should not be underestimated. The results of our biobjective analysis are valid also for weighted sum, constraint-based, or single-objective models. Based on this analysis, we conclude that monotonic equity functions are more appropriate for certain types of VRP models, and suggest promising avenues for further research. (Comment: Accepted manuscript)
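    To make the distinction concrete, the sketch below computes a few workload-equity measures over the per-tour workloads of a single routing solution. These are representative measures from the workload-balancing literature, not necessarily the exact six analysed in the article; the maximum workload is monotonic, while the range, standard deviation and Gini coefficient can improve when an individual tour is lengthened (e.g. lengthening the shortest tour shrinks the range), which is the kind of nonmonotonic behaviour the survey warns about.

```python
# Hedged illustration: common workload-equity measures over one solution's tour workloads.
from statistics import mean, pstdev

def equity_measures(workloads: list[float]) -> dict[str, float]:
    n = len(workloads)
    mu = mean(workloads)
    # Gini coefficient: mean absolute pairwise difference, normalized by 2 * mean.
    gini = sum(abs(a - b) for a in workloads for b in workloads) / (2 * n * n * mu)
    return {
        "max": max(workloads),                     # monotonic min-max objective
        "range": max(workloads) - min(workloads),  # nonmonotonic
        "std_dev": pstdev(workloads),              # nonmonotonic
        "gini": gini,                              # nonmonotonic
    }

# Workloads (e.g. tour lengths) of a hypothetical 4-vehicle solution.
print(equity_measures([102.0, 98.0, 110.0, 95.0]))
```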

    Theories of Fairness and Reciprocity

    Most economic models are based on the self-interest hypothesis that assumes that all people are exclusively motivated by their material self-interest. In recent years experimental economists have gathered overwhelming evidence that systematically refutes the self-interest hypothesis and suggests that many people are strongly motivated by concerns for fairness and reciprocity. Moreover, several theoretical papers have been written showing that the observed phenomena can be explained in a rigorous and tractable manner. These theories in turn induced a new wave of experimental research offering additional exciting insights into the nature of preferences and into the relative performance of competing theories of fairness. The purpose of this paper is to review these recent developments, to point out open questions, and to suggest avenues for future research.
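    One of the best-known theories in this literature is the Fehr–Schmidt model of inequity aversion; as a brief sketch of what such a rigorous and tractable formulation looks like (two-player case, with the parameter restrictions of the original model):

```latex
% Fehr–Schmidt inequity-aversion utility for player i (two-player case):
% utility falls when the player is behind (weight \alpha_i) and, less strongly,
% when the player is ahead (weight \beta_i).
U_i(x) = x_i - \alpha_i \max\{x_j - x_i,\, 0\} - \beta_i \max\{x_i - x_j,\, 0\},
\qquad j \neq i,\quad \beta_i \le \alpha_i,\quad 0 \le \beta_i < 1 .
```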

    Theories of Fairness and Reciprocity – Evidence and Economic Applications

    Most economic models are based on the self-interest hypothesis that assumes that all people are exclusively motivated by their material self-interest. In recent years experimental economists have gathered overwhelming evidence that systematically refutes the self-interest hypothesis and suggests that many people are strongly motivated by concerns for fairness and reciprocity. Moreover, several theoretical papers have been written showing that the observed phenomena can be explained in a rigorous and tractable manner. These theories in turn induced a new wave of experimental research offering additional exciting insights into the nature of preferences and into the relative performance of competing theories of fairness. The purpose of this paper is to review these recent developments, to point out open questions, and to suggest avenues for future research. Keywords: behavioral economics, fairness, reciprocity, altruism, experiments, incentives, contracts, competition.

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality ensures that applications that are developed are failure free. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since it is a mechanism that correctly predicts the defect proneness of modules and classifies modules, which saves resources, time and developers' efforts. In this study, a model that selects relevant features that can be used in defect prediction was proposed. The literature was reviewed, and it revealed that process metrics, which are based on historic source code changes over time, are better predictors of defects in version control systems. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions to the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted using open source software (OSS) from software product lines (SPLs); hence, process metrics were chosen. Data sets that are used in defect prediction may contain non-significant and redundant attributes that can affect the accuracy of machine-learning algorithms. In order to improve the prediction accuracy of classification models, only features that are significant to the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data; feature selection is a pre-processing step that helps to reduce the dimensionality of the data, and such techniques include information-theoretic methods based on the entropy concept. This study examined the efficiency of these feature selection techniques and found that software defect prediction using significant attributes improves prediction accuracy. A novel MICFastCR model was developed that uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation Based Filter (FCBF) to eliminate redundant attributes; machine learning algorithms were then run to predict software defects. The MICFastCR model achieved the highest prediction accuracy as reported by various performance measures. School of Computing, Ph.D. (Computer Science).
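    The sketch below illustrates the general shape of such a two-stage relevance-plus-redundancy filter, not the authors' MICFastCR implementation: it assumes the minepy package for MIC, an arbitrary relevance threshold, and uses pairwise MIC as the redundancy criterion in place of FCBF's symmetrical uncertainty, all of which are simplifications made here for brevity.

```python
# Simplified sketch of a MIC-based relevance filter with an FCBF-style redundancy pass.
# Not the MICFastCR model itself; the threshold and redundancy criterion are assumptions.
import numpy as np
from minepy import MINE

def mic(x: np.ndarray, y: np.ndarray) -> float:
    m = MINE(alpha=0.6, c=15)  # minepy's default MIC parameters
    m.compute_score(x, y)
    return m.mic()

def select_features(X: np.ndarray, y: np.ndarray,
                    relevance_threshold: float = 0.1) -> list[int]:
    """Return indices of features that are relevant to y and not redundant."""
    n_features = X.shape[1]
    # 1) Relevance: keep features whose MIC with the defect label clears the threshold.
    relevance = [(j, mic(X[:, j], y)) for j in range(n_features)]
    candidates = sorted(((j, r) for j, r in relevance if r >= relevance_threshold),
                        key=lambda t: -t[1])
    # 2) Redundancy: walking down the relevance ranking, drop a feature if it is more
    #    strongly associated with an already-selected feature than with the label.
    selected: list[int] = []
    for j, r in candidates:
        if all(mic(X[:, j], X[:, k]) < r for k in selected):
            selected.append(j)
    return selected

# The retained columns would then be fed to ordinary classifiers
# (e.g. scikit-learn models) to predict defect-prone modules.
```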