
    DISTANCE MEASURES IN AGGREGATING PREFERENCE DATA

    The aim of this paper is to present methods for aggregating individual preference scores by means of distance measures. Three groups of distance measures are discussed: measures that use preference distributions for all pairs of objects (e.g. Kemeny's measure, Bogart's measure), distance measures based on ranking data (e.g. the Spearman distance, the Podani distance), and distance measures using transformations permissible on an ordinal scale (the GDM2 distance). The relevant distance formulas are presented, and the aggregation of individual preferences under each distance measure is carried out using the R program.
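
    As a rough illustration of the distance-based aggregation idea (the paper itself works in R; the code and data below are an invented Python sketch, not the authors' implementation), the following computes two of the ranking distances mentioned above and picks the consensus ranking that minimizes the total distance to all individual rankings:

    from itertools import combinations, permutations

    def kemeny_distance(r1, r2):
        """Number of object pairs ordered differently by two rankings
        (rankings are dicts mapping object -> rank position)."""
        return sum(
            1
            for a, b in combinations(r1, 2)
            if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
        )

    def spearman_distance(r1, r2):
        """Sum of squared rank differences over all objects."""
        return sum((r1[o] - r2[o]) ** 2 for o in r1)

    def consensus(rankings, distance):
        """Brute-force consensus: the ranking minimizing total distance
        to all individual rankings (feasible only for a few objects)."""
        objects = list(rankings[0].keys())
        best, best_cost = None, float("inf")
        for perm in permutations(objects):
            candidate = {o: i + 1 for i, o in enumerate(perm)}
            cost = sum(distance(candidate, r) for r in rankings)
            if cost < best_cost:
                best, best_cost = candidate, cost
        return best, best_cost

    # Three judges rank four objects (1 = most preferred).
    judges = [
        {"A": 1, "B": 2, "C": 3, "D": 4},
        {"A": 2, "B": 1, "C": 3, "D": 4},
        {"A": 1, "B": 3, "C": 2, "D": 4},
    ]
    print(consensus(judges, kemeny_distance))
    print(consensus(judges, spearman_distance))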

    Adaptation and implementation of a process of innovation and design within a SME

    A design process is a sequence of design phases, starting with the design requirement and leading to a definition of one or several system architectures. For every design phase, various support tools and resolution methods are proposed in the literature. These tools are, however, very difficult to implement in an SME, which often lacks resources. In this article we propose a complete design process for new manufacturing techniques, based on creativity and knowledge re-use in the search for technical solutions. Conscious of the difficulties of appropriation in SMEs, for every phase of our design process we propose resolution tools adapted to the context of a small firm. Design knowledge has been capitalized in a knowledge base. The knowledge structuring we propose is based on functional logic; the design process is likewise based on the functional decomposition of the system and integrates the simplification of the system architecture from the early phases of the process. For this purpose, aggregation and embodiment phases are proposed and guided by heuristics.

    Integrating spatial indicators in the surveillance of exploited marine ecosystems

    Spatial indicators are used to quantify the status of species and ecosystems, that is, the impacts of climate and anthropogenic changes, as well as to understand species ecology. These metrics are thus decisive for stakeholders' decisions on the conservation measures to be implemented. A detailed review of the literature (55 papers) showed that 18 spatial indicators are commonly used in marine ecology. These indicators were then characterized and studied in detail, based on their application to empirical data (a time series of 35 marine species' spatial distributions, sampled either with a random stratified survey or with regular transect surveys). The results suggest that the indicators can be grouped into three classes that summarize the way individuals occupy space: occupancy (the area occupied by a species), aggregation (spreading or concentration of species biomass) and quantity-dependent (indicators correlated with biomass), whether or not they are spatially explicit (i.e. include the geographic coordinates, e.g. the center of gravity). The indicators' temporal variability was lower than their between-species variability, and no clear effect of sampling design was observed. Species were then classified according to their indicators. One indicator was selected from each of the three categories to represent the main axes of species' spatial behavior and to interpret them in terms of occupancy-aggregation-quantity relationships. All species considered were then classified, according to their relationships along those three axes, into species that under increasing abundance primarily increase occupancy, aggregation, or both. We suggest using these relationships along the three axes as surveillance diagrams to follow the yearly evolution of species' distribution patterns in the future.
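
    To make the three classes of indicators concrete (a minimal sketch on invented survey data, not the indicators as defined in the paper), the snippet below computes one example of each type: occupancy, a spatially explicit center of gravity, and a simple concentration-style aggregation index:

    import numpy as np

    # Hypothetical survey data: one row per sampling station.
    # lon, lat in degrees; biomass is the catch at that station.
    lon     = np.array([-9.5, -9.0, -8.5, -8.0, -7.5])
    lat     = np.array([38.0, 38.5, 39.0, 39.5, 40.0])
    biomass = np.array([0.0, 12.0, 55.0, 20.0, 3.0])

    # Occupancy: fraction of sampled stations where the species is present.
    occupancy = np.mean(biomass > 0)

    # Center of gravity: biomass-weighted mean position (spatially explicit).
    cg_lon = np.average(lon, weights=biomass)
    cg_lat = np.average(lat, weights=biomass)

    # A simple aggregation-type index (illustrative only): share of total
    # biomass held by the top 25% of stations (higher = more aggregated).
    top = np.sort(biomass)[::-1][: max(1, len(biomass) // 4)]
    aggregation = top.sum() / biomass.sum()

    print(f"occupancy={occupancy:.2f}, CG=({cg_lon:.2f}, {cg_lat:.2f}), "
          f"aggregation={aggregation:.2f}")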

    Sharing the Burden of Collective Security in the European Union. Research Note

    This article compares European Union (EU) burden-sharing in security governance, distinguishing between assurance, prevention, protection, and compellence policies. We employ joint-product models and examine the variation in the level of publicness, the asymmetry of the distribution of costs and benefits, and the aggregation technologies in each policy domain. Joint-product models predict equal burden-sharing for protection and assurance because of their respective weakest-link and summation aggregation technologies with symmetric costs. Prevention is also characterized by the technology of summation, but the asymmetry of costs implies uneven burden-sharing. Uneven burden-sharing is predicted for compellence because it has the largest asymmetry of costs and a best-shot aggregation technology. Evaluating burden-sharing relative to a country's ability to contribute, Kendall tau tests examine the rank correlation between security burden and the capacity of EU member states. These tests show that the smaller EU members disproportionately shoulder the costs of assurance and protection; wealthier EU members carry a somewhat disproportionate burden in the provision of prevention, and larger EU members in the provision of compellence. When analyzing contributions relative to expected benefits, asymmetric marginal costs can largely explain uneven burden-sharing. The main conclusion is that the aggregated burden of collective security governance in the EU is shared quite evenly.
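
    The rank-correlation test referred to above can be illustrated with a few lines of Python (the figures below are invented for illustration, not the article's data):

    from scipy.stats import kendalltau

    # Hypothetical capacity (e.g. GDP share) and security-burden share
    # for ten member states, in the same order.
    gdp_share    = [26.0, 18.0, 15.0, 11.0, 9.0, 6.0, 5.0, 4.0, 3.5, 2.5]
    burden_share = [20.0, 16.0, 17.0, 10.0, 8.0, 7.0, 6.0, 6.0, 5.0, 5.0]

    tau, p_value = kendalltau(gdp_share, burden_share)
    print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
    # tau close to +1: larger members shoulder proportionally more;
    # tau near zero or negative: smaller members carry a disproportionate load.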

    Learning Reputation in an Authorship Network

    The problem of searching for experts in a given academic field is hugely important in both industry and academia. We study exactly this issue with respect to a database of authors and their publications. The idea is to use Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform topic modelling in order to find authors who have worked in a query field. We then construct a coauthorship graph and motivate the use of influence maximisation and a variety of graph centrality measures to obtain a ranked list of experts. The ranked lists are further improved using a Markov chain-based rank aggregation approach. The complete method is readily scalable to large datasets. To demonstrate the efficacy of the approach, we report on an extensive set of computational simulations using the Arnetminer dataset. An improvement in mean average precision is demonstrated over the baseline case of simply using the order of authors found by the topic models.
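
    A common Markov chain-based rank aggregation scheme is the MC4-style construction: from the current candidate, move to another candidate if a majority of the input rankings prefer it, and rank candidates by the stationary distribution of the resulting chain. The sketch below is a generic illustration of that idea (not the paper's implementation), with invented inputs standing in for rankings produced by different centrality measures:

    import numpy as np

    def mc4_aggregate(ranked_lists, alpha=0.15, n_iter=200):
        """MC4-style Markov-chain rank aggregation sketch.
        ranked_lists: lists of item ids, best first. Returns the items
        sorted by stationary probability (higher = better consensus rank)."""
        items = sorted({x for lst in ranked_lists for x in lst})
        idx = {x: i for i, x in enumerate(items)}
        n = len(items)
        # Position of each item in each list (missing items rank last).
        pos = [{x: r for r, x in enumerate(lst)} for lst in ranked_lists]

        P = np.zeros((n, n))
        for i in items:
            for j in items:
                if i == j:
                    continue
                wins = sum(p.get(j, n) < p.get(i, n) for p in pos)
                if wins > len(ranked_lists) / 2:   # j beats i in a majority
                    P[idx[i], idx[j]] = 1.0 / (n - 1)
        P += np.diag(1.0 - P.sum(axis=1))          # stay with leftover mass
        # Teleportation keeps the chain ergodic (same trick as PageRank).
        P = (1 - alpha) * P + alpha / n

        pi = np.full(n, 1.0 / n)
        for _ in range(n_iter):                    # power iteration
            pi = pi @ P
        return sorted(items, key=lambda x: -pi[idx[x]])

    # Three rankers (e.g. different centrality measures) over five authors.
    lists = [["a", "b", "c", "d", "e"],
             ["b", "a", "c", "e", "d"],
             ["a", "c", "b", "d", "e"]]
    print(mc4_aggregate(lists))   # consensus ordering of the five authors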

    Semi-Analytical Models for the Formation of Disk Galaxies: I. Constraints from the Tully-Fisher Relation

    We present new semi-analytical models for the formation of disk galaxies with the purpose of investigating the origin of the near-infrared Tully-Fisher (TF) relation. The models assume that disks are formed by cooling of the baryons inside dark halos with realistic density profiles, and that the baryons conserve their specific angular momentum. Only gas with densities above the critical density given by Toomre's stability criterion is considered eligible for star formation, and a simple recipe for supernova feedback is included. We emphasize the importance of extracting the proper luminosity and velocity measures from the models, something that has often been ignored in the past. The observed K-band TF relation has a slope that is steeper than simple predictions based on dynamical arguments suggest. Taking the stability-related star formation threshold densities into account steepens the TF relation, decreases its scatter, and yields gas mass fractions that are in excellent agreement with observations. In order for the TF slope to be as steep as observed, further physics is required. We argue that the characteristics of the observed near-infrared TF relation do not reflect systematic variations in stellar populations or cosmological initial conditions, but are governed by feedback. Finally, we show that our models provide a natural explanation for the small amount of scatter that makes the TF relation useful as a cosmological distance indicator.
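
    For reference, the star formation threshold mentioned above is usually written in the standard textbook form of the Toomre criterion (quoted here as the generic result, not as the exact expression used in the paper):

    \[
    \Sigma_{\mathrm{crit}}(R) \;=\; Q_{\mathrm{crit}}\,\frac{\sigma_{\mathrm{gas}}\,\kappa(R)}{\pi G},
    \qquad
    \text{star formation only where } \Sigma_{\mathrm{gas}}(R) > \Sigma_{\mathrm{crit}}(R),
    \]

    where \(\kappa(R)\) is the epicyclic frequency, \(\sigma_{\mathrm{gas}}\) the gas velocity dispersion, \(G\) the gravitational constant, and \(Q_{\mathrm{crit}}\) a critical Toomre parameter of order unity.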

    Measuring real value and inflation

    Get PDF
    The most important economic measures are monetary. They have many different names, are derived in different theories and employ different formulas. Yet they all attempt to do basically the same thing: to separate a change in nominal value into a ‘real’ part due to changes in quantities and an inflation component due to changes in prices. Examples are real national product and its components, the GNP deflator, the CPI, various measures related to consumer surplus, as well as the large number of formulas for price and quantity indexes that have been proposed. The theories that have been developed to derive these measures are largely unsatisfactory. The axiomatic theory of indexes does not make clear which economic problem a particular formula can be used to solve, and the economic theories are for the most part based on unrealistic assumptions. For example, the theory of the CPI is usually developed for a single consumer with homothetic preferences and then applied to a large aggregate of diverse consumers with non-homothetic preferences. In this paper I develop a unitary theory that can be used in all situations in which monetary measures have been used. The theory implies a uniquely optimal measure, which turns out to be the Törnqvist index. I review, and partly re-interpret, the derivations of this index in the literature and provide several new derivations. The paper also covers several related topics, particularly the presently unsatisfactory determination of the components of real GDP.
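
    For reference, the Törnqvist price index mentioned above is, in its standard form (quoted as the textbook definition, not as derived in the paper),

    \[
    \ln P_{T} \;=\; \sum_{i} \tfrac{1}{2}\bigl(s_{i,0} + s_{i,1}\bigr)\,
    \ln\frac{p_{i,1}}{p_{i,0}},
    \qquad
    s_{i,t} \;=\; \frac{p_{i,t}\,q_{i,t}}{\sum_{j} p_{j,t}\,q_{j,t}},
    \]

    where \(p_{i,t}\) and \(q_{i,t}\) are the price and quantity of good \(i\) in period \(t\) and \(s_{i,t}\) is its expenditure share; the corresponding quantity index is defined analogously with the roles of prices and quantities interchanged, so that the price and quantity indexes together decompose the change in nominal value.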