
    Improving the Performance of Low Voltage Networks by an Optimized Unbalance Operation of Three-Phase Distributed Generators

    This work focuses on using the full potential of PV inverters to improve the efficiency of low voltage networks. More specifically, it considers the independent per-phase control capability of three-phase four-wire PV inverters, which can inject different active and reactive powers in each phase, in order to reduce the phase unbalance of the system. This new operational procedure is analyzed by formulating an optimization problem based on an accurate model of European low voltage networks. The paper includes a comprehensive quantitative comparison of the proposed strategy with two state-of-the-art methodologies to highlight the obtained benefits. The results show that the proposed independent per-phase control of three-phase PV inverters considerably improves network performance, contributing to an increased penetration of renewable energy sources. Ministerio de Economía y Competitividad ENE2017-84813-R, ENE2014-54115-
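As a toy numeric illustration of the phase unbalance the abstract refers to, the sketch below computes the standard voltage unbalance factor (the ratio of negative- to positive-sequence voltage via the Fortescue transform). This metric and the phasor values are assumptions for the example, not taken from the paper:

```python
import cmath

def unbalance_factor(va, vb, vc):
    """Voltage unbalance factor: |negative sequence| / |positive sequence|."""
    a = cmath.exp(2j * cmath.pi / 3)       # 120-degree rotation operator
    v_pos = (va + a * vb + a**2 * vc) / 3  # positive-sequence component
    v_neg = (va + a**2 * vb + a * vc) / 3  # negative-sequence component
    return abs(v_neg) / abs(v_pos)

# A perfectly balanced three-phase set gives a factor of 0...
balanced = (230 + 0j,
            230 * cmath.exp(-2j * cmath.pi / 3),
            230 * cmath.exp(2j * cmath.pi / 3))
print(round(unbalance_factor(*balanced), 6))  # 0.0

# ...while unequal phase magnitudes (e.g. uneven loading) raise it above 0.
unbalanced = (235 + 0j,
              225 * cmath.exp(-2j * cmath.pi / 3),
              230 * cmath.exp(2j * cmath.pi / 3))
print(unbalance_factor(*unbalanced) > 0)      # True
```

Per-phase injection control, as proposed in the paper, amounts to choosing the per-phase powers so that a measure of this kind is driven down across the network.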

    La España transterrada de Rafael Alberti


    An alternative measurement of the entropy evolution of a genetic algorithm

    This is an electronic version of the paper presented at The European Simulation and Modelling Conference (ESM), held in Leicester (United Kingdom) in 2009. In a genetic algorithm, fluctuations of the entropy of a genome over time are interpreted as fluctuations of the information that the genome's organism stores about its environment, which is reflected in more complex organisms. Computing this entropy presents technical problems due to the small population sizes used in practice. In this work we propose and test an alternative way of measuring the entropy variation in a population by means of algorithmic information theory, where the entropy variation between two generational steps is the Kolmogorov complexity of the first step conditioned on the second. We also report experimental differences in entropy evolution between systems in which sexual reproduction is present or absent. This work has been partially sponsored by MICINN, project TIN2008-02081/TIN and by DGUI CAM/UAM, project CCG08-UAM/TIC-4425
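Kolmogorov complexity is uncomputable, so in practice it is approximated by compressed length. A minimal sketch of the idea behind conditional complexity, using zlib as a stand-in compressor (the approximation C(yx) - C(y), the toy "genome" strings, and the choice of compressor are assumptions, not the authors' exact implementation):

```python
import os
import zlib

def c(data: bytes) -> int:
    """Approximate Kolmogorov complexity by zlib-compressed length."""
    return len(zlib.compress(data, 9))

def cond_complexity(x: bytes, y: bytes) -> int:
    """Approximate K(x | y) as C(y + x) - C(y): the extra compressed
    bytes needed to describe x once y is already known."""
    return c(y + x) - c(y)

gen_t  = b"ACGT" * 200                  # toy population at generation t
gen_t1 = b"ACGT" * 190 + b"ACGA" * 10   # slightly mutated next generation

# A generation similar to the previous one needs few extra bytes,
# while an unrelated random-looking one needs many more.
print(cond_complexity(gen_t1, gen_t))
print(cond_complexity(os.urandom(800), gen_t))
```

Under this reading, the per-generation entropy variation the paper measures is the conditional complexity between consecutive population snapshots.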

    Automatic generation of benchmarks for plagiarism detection tools using grammatical evolution

    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in {Source Publication}, http://dx.doi.org/10.1145/1276958.1277388. An extended version of this poster is available at arXiv; see http://arxiv.org/abs/cs/0703134v4. Student plagiarism is a major problem in universities worldwide. In this paper, we focus on plagiarism in answers to computer programming assignments, where students mix and/or modify one or more original solutions to obtain counterfeits. Although several software tools have been developed to help with the tedious and time-consuming task of detecting plagiarism, little has been done to assess their quality, because determining the real authorship of the whole submission corpus is practically impossible for graders. In this article we present a Grammatical Evolution technique which generates benchmarks for testing plagiarism detection tools. Given a programming language, our technique generates a set of original solutions to an assignment, together with a set of plagiarisms of the former set which mimic the basic plagiarism techniques performed by students. The authorship of the submission corpus is predefined by the user, providing a basis for the assessment and further comparison of copy-catching tools. We give empirical evidence of the suitability of our approach by studying the behavior of one state-of-the-art detection tool (AC) on four benchmarks coded in APL2, generated with our technique. Work supported by grant TSI2005-08255-C07-06 of the Spanish Ministry of Education and Science
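A toy sketch of the kind of transformation such a benchmark generator applies (this is not the authors' Grammatical Evolution system; the seed program, the identifier mapping, and the corpus layout are invented for illustration). It derives a "plagiarism" with known authorship from a seed solution by systematic identifier renaming, one of the basic student techniques the abstract mentions:

```python
import re

SEED = """def total(items):
    acc = 0
    for x in items:
        acc = acc + x
    return acc
"""

def rename_identifiers(src: str, mapping: dict) -> str:
    """Rename whole-word identifiers, mimicking a common disguise tactic."""
    for old, new in mapping.items():
        src = re.sub(rf"\b{old}\b", new, src)
    return src

# Benchmark corpus with authorship known by construction:
# one original and one counterfeit derived from it.
copy = rename_identifiers(SEED, {"acc": "s", "items": "values", "x": "v"})
corpus = {"original_1": SEED, "plagiarism_of_1": copy}
print(corpus["plagiarism_of_1"])
```

Because the corpus is generated, the ground-truth authorship is available for free, which is exactly what makes such benchmarks usable for scoring detection tools.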

    Common Pitfalls Using the Normalized Compression Distance: What to Watch Out for in a Compressor

    Using the mathematical background for algorithmic complexity developed by Kolmogorov in the sixties, Cilibrasi and Vitanyi have designed a similarity distance named the normalized compression distance, applicable to the clustering of objects of any kind, such as music, texts or gene sequences. The normalized compression distance is a quasi-universal normalized admissible distance under certain conditions. This paper shows that the compressors used to compute the normalized compression distance are not idempotent in some cases, being strongly skewed by the size of the objects and the window size, and therefore causing a deviation in the identity property of the distance unless care is taken that the objects to be compressed fit within the compressor's window. The relationship between the precision of the distance and the size of the objects has been analyzed for several well-known compressors, and especially in depth for three cases, bzip2, gzip and PPMZ, which are examples of the three main types of compressors: block-sorting, Lempel-Ziv, and statistical. This work was partially supported by grant TSI2005-08255-C07-06 of the Spanish Ministry of Education and Science
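The window-size pitfall can be reproduced directly. The sketch below computes the NCD with zlib (a DEFLATE/Lempel-Ziv compressor with a 32 KiB window, standing in for gzip; the object sizes are chosen for the example): NCD(x, x) should be near 0, but once x is so large that the second copy in x+x can no longer reference the first within the window, the identity property breaks down:

```python
import os
import zlib

def ncd(x: bytes, y: bytes, compress) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

gz = lambda d: zlib.compress(d, 9)

small = os.urandom(10_000)   # second copy of x fits inside the 32 KiB window
large = os.urandom(100_000)  # second copy cannot reach back to the first

# Idempotency roughly holds for small objects: NCD(x, x) stays near 0...
print(ncd(small, small, gz))
# ...but for objects beyond the window, NCD(x, x) climbs toward 1.
print(ncd(large, large, gz))
```

Incompressible (random) data makes the effect stark: the only way to compress the second copy cheaply is to back-reference the first, which the window forbids for large objects. This is the deviation in the identity property that the paper quantifies.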