9 research outputs found

    State of the art review of the existing soft computing based approaches to trust and reputation computation

    In this paper we present a state-of-the-art review of PageRank™-based approaches to trust and reputation computation. We divide the approaches that use the PageRank™ method for trust and reputation computation into six classes; each of the six classes is discussed in this paper.
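    Since all six classes in the review build on the PageRank™ method, the core computation is worth recalling. Below is a minimal power-iteration sketch of PageRank over a directed trust graph; the example graph and damping factor are illustrative assumptions, not taken from the paper.

        # Minimal PageRank power iteration over a directed trust graph.
        # The example graph and damping factor d are illustrative only.
        def pagerank(graph, d=0.85, iters=50):
            """graph: dict mapping each node to the list of nodes it trusts."""
            nodes = list(graph)
            n = len(nodes)
            rank = {v: 1.0 / n for v in nodes}
            for _ in range(iters):
                new = {v: (1.0 - d) / n for v in nodes}
                for v, outs in graph.items():
                    if outs:
                        share = rank[v] / len(outs)
                        for w in outs:
                            new[w] += d * share
                    else:  # dangling node: spread its rank uniformly
                        for w in nodes:
                            new[w] += d * rank[v] / n
                rank = new
            return rank

        trust = {"alice": ["bob"], "bob": ["carol"], "carol": ["alice", "bob"]}
        print(pagerank(trust))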

    New developments on the cheminformatics open workflow environment CDK-Taverna

    Background: The computational processing and analysis of small molecules is at the heart of cheminformatics and structural bioinformatics and of their application in, e.g., metabolomics or drug discovery. Pipelining or workflow tools allow for the Lego™-like graphical assembly of I/O modules and algorithms into a complex workflow, which can be easily deployed, modified and tested without the hassle of implementing it in a monolithic application. The CDK-Taverna project aims at building a free, open-source cheminformatics pipelining solution by combining different open-source projects such as Taverna, the Chemistry Development Kit (CDK) and the Waikato Environment for Knowledge Analysis (WEKA). A first integrated version 1.0 of CDK-Taverna was recently released to the public.

    Results: The CDK-Taverna project was migrated to the most up-to-date versions of its foundational software libraries, with a complete re-engineering of its worker architecture (version 2.0). 64-bit computing and multi-core usage by parallel threads are now supported, allowing fast in-memory processing and analysis of large sets of molecules. Earlier deficiencies such as workarounds for iterative data reading have been removed. The reaction enumeration features for combinatorial chemistry are considerably enhanced. Additional functionality for calculating a natural-product-likeness score for small molecules is implemented to identify possible drug candidates. Finally, the data analysis capabilities are extended with new workers that provide access to the open-source WEKA library for clustering and machine learning, as well as for training and test set partitioning. The new features are outlined with usage scenarios.

    Conclusions: CDK-Taverna 2.0 has matured into a freely available and increasingly powerful open-source cheminformatics workflow solution for the biosciences. The combination of the new CDK-Taverna worker family with the workflows already developed by a lively Taverna community and published on myexperiment.org enables molecular scientists to quickly calculate, process and analyse molecular data as typically found in, e.g., today's systems biology scenarios.
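    To make the "Lego™-like assembly" idea concrete, here is a toy sketch of composable workers chained into a pipeline. The worker functions are hypothetical placeholders written in Python for illustration; they are not CDK-Taverna's actual (Java-based) worker API.

        # Toy illustration of the worker/pipeline idea behind tools such as
        # CDK-Taverna.  Both workers are invented stand-ins, not project code.
        from functools import reduce

        def read_molecules(path):
            # Stand-in for an I/O worker; ignores `path`, returns two records.
            return [{"id": i, "smiles": s} for i, s in enumerate(["CCO", "c1ccccc1"])]

        def count_heavy_atoms(mols):
            # Stand-in for a descriptor-calculation worker.
            for m in mols:
                m["heavy_atoms"] = sum(ch.isalpha() and ch.upper() != "H"
                                       for ch in m["smiles"])
            return mols

        def pipeline(*workers):
            # Chain workers so each one's output feeds the next, Lego-style.
            return lambda data: reduce(lambda d, w: w(d), workers, data)

        run = pipeline(count_heavy_atoms)
        print(run(read_molecules("molecules.smi")))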

    Modeling the Mass Function of Stellar Clusters Using the Modified Lognormal Power-Law Probability Distribution Function

    We use the Modified Lognormal Power-law (MLP) probability distribution function to model the behaviour of the mass function (MF) of young and populous stellar populations in different environments. We begin by modeling the MF of NGC 1711, a simple stellar population (SSP) in the Large Magellanic Cloud, as a pilot case, and use model selection criteria to differentiate between candidate models. Using the MLP, we find that the stellar catalogue of NGC 1711 follows a pure power-law behaviour below the completeness limit, with slope α = 2.75 for dN/d ln m ∝ m^(−α+1) in the mass range 0.89 M⊙ to 7.75 M⊙. Furthermore, we show that the MLP takes a truncated form for a fixed stopping time for accretion. By model selection criteria, we conclude that the MLP is the most useful candidate for modeling lognormal, power-law or hybrid behaviour of the MF.
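    For readers checking the slope convention: dN/d ln m ∝ m^(−α+1) is equivalent to dN/dm ∝ m^(−α), for which the standard maximum-likelihood estimator above a cutoff m_min is α̂ = 1 + n / Σ ln(m_i / m_min). The sketch below recovers the quoted slope from synthetic data; this is the generic estimator (cf. Clauset et al. 2009), not necessarily the fitting procedure used in the paper.

        # MLE of a pure power-law slope alpha for dN/dm ∝ m^(-alpha) above
        # a lower cutoff m_min.  Generic estimator, shown for illustration.
        import math
        import random

        def powerlaw_mle(masses, m_min):
            sample = [m for m in masses if m >= m_min]
            n = len(sample)
            return 1.0 + n / sum(math.log(m / m_min) for m in sample)

        # Draw synthetic masses from dN/dm ∝ m^(-2.75) by inverse transform.
        alpha_true, m_min = 2.75, 0.89
        masses = [m_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
                  for _ in range(10000)]
        print(powerlaw_mle(masses, m_min))  # should recover ~2.75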

    A comparison of reputation-based trust systems

    Recent literature contains many examples of reputation systems constructed in an ad hoc way or relying on some heuristic that has been found useful. However, comparisons between these reputation systems have been impossible because there are no established methods of comparing performance. This paper introduces a simulation framework that can be used to perform comparative analysis of reputation models. Two reputation models, one from Abdul-Rahman and Hailes (ARH) [1] and one from Mui, Mohtashemi, and Halberstadt (MMH) [17], are implemented and compared with regard to accuracy, performance and resistance to deception. In order to improve performance in certain cases, MMH is modified to distinguish the concept of “trust” from the concept of “reputation.” Additionally, we examine the results of shortening the memory of MMH in order to improve results in environments that are constantly changing.
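    As a concrete illustration of the memory-shortening idea, here is a minimal beta-style reputation estimate with an exponential forgetting factor. It is a simplified sketch in the spirit of probabilistic models such as MMH, not a faithful implementation of either compared system; the class and decay parameter are invented for illustration.

        # Reputation as the expected probability of a good interaction,
        # given decayed counts of good and bad outcomes.  Illustrative only.
        class Reputation:
            def __init__(self, decay=1.0):
                self.good = 0.0
                self.bad = 0.0
                self.decay = decay  # < 1.0 shortens memory in changing environments

            def record(self, outcome_good):
                # Forget a little of the history, then count the new outcome.
                self.good *= self.decay
                self.bad *= self.decay
                if outcome_good:
                    self.good += 1.0
                else:
                    self.bad += 1.0

            def estimate(self):
                # Mean of a Beta(good + 1, bad + 1) posterior (uniform prior).
                return (self.good + 1.0) / (self.good + self.bad + 2.0)

        r = Reputation(decay=0.9)
        for outcome in [True, True, False, True]:
            r.record(outcome)
        print(round(r.estimate(), 3))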

    Development of resource allocation strategies based on cognitive radio

    [no abstract]

    Corpus-adaptive Named Entity Recognition

    Named Entity Recognition (NER) is an important step towards the automatic analysis of natural language and is needed for a range of natural language applications. The task of NER requires the recognition and classification of proper names and other unique identifiers according to a predefined category system, e.g. the “traditional” categories PERSON, ORGANIZATION (companies, associations) and LOCATION. While most previous work deals with the recognition of these traditional categories within English newspaper texts, the approach presented in this thesis goes beyond that scope. It is particularly motivated by NER settings that are more challenging than the classical task, such as German-language NER or the identification of biomedical entities within scientific texts. Additionally, the approach addresses the ease of development and maintainability of NER services by emphasizing the need for “corpus-adaptive” systems, where “corpus-adaptivity” describes whether a system can be easily adapted to new tasks and new text corpora. In order to implement such a corpus-adaptive system, three design guidelines are proposed: (i) the consistent use of machine-learning techniques instead of manually created linguistic rules; (ii) a strictly data-oriented modelling of the phenomena instead of a generalization based on intellectual categories; (iii) the use of automatically extracted knowledge about Named Entities, gained by analysing large amounts of raw text. A prototype was implemented according to these guidelines, and its evaluation shows the feasibility of the approach. The system, originally developed for a German newspaper corpus, could easily be adapted and applied to the extraction of biomedical entities from scientific abstracts written in English, demonstrating the corpus-adaptivity of the approach. Despite limited resources in comparison with other state-of-the-art systems, the prototype scored competitive results for some of the categories.
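    Guideline (i), machine learning over hand-written rules, can be illustrated with a toy token classifier built from purely data-oriented features. The miniature corpus and feature set below are invented for illustration and are nowhere near the thesis prototype; the sketch assumes scikit-learn is available.

        # Toy learned token classifier: entity labels from surface features
        # of tokens, with no hand-written linguistic rules.  Illustrative only;
        # a corpus this small cannot yield meaningful predictions.
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        def features(token):
            # Data-oriented surface features, no intellectual categories.
            return {"lower": token.lower(), "is_cap": token[0].isupper(),
                    "suffix3": token[-3:], "has_digit": any(c.isdigit() for c in token)}

        train = [("Angela", "PERSON"), ("Merkel", "PERSON"), ("visited", "O"),
                 ("Siemens", "ORGANIZATION"), ("in", "O"), ("Munich", "LOCATION"),
                 ("yesterday", "O")]

        model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
        model.fit([features(t) for t, _ in train], [label for _, label in train])
        print(model.predict([features("Berlin")]))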

    The affect-language-invariability link in early infantile autism: a contribution towards a redescription and adjustment of the autistic phenomenon

    Affiliation: Bellone Cecchin, María Eugenia. Universidad Nacional de Córdoba, Facultad de Psicología; Argentina. This doctoral thesis is framed within the research project “Epistemología e historia crítica de la clínica Psi. Segundo período” and continues one of the lines opened by the doctoral thesis “Reconstrucción racional de las teorías psicológicas y psicopatológicas de Sigmund Freud utilizando la Metodología de Programas de Investigación (MPIC)”, written by Dr. Juan de la Cruz Argañaraz (2014). The use of this methodology made it possible to identify the Clinical Research Programme of Psychiatry (PIC Cl), which remained in force for many years until its stagnation at the beginning of the twentieth century. Using the MPIC, this thesis shows how Leo Kanner took up the heuristics and hard core of the PIC Cl to shape American child psychiatry, making the programme progressive, even while the “psi” field was under the dominance of psychoanalysis. In this way, Kanner became an exponent, perhaps the last, of the PIC Cl. The programme allowed him to distinguish a novel childhood clinical picture from childhood schizophrenia and mental retardation: early infantile autism (AIP). This “new fact” for the clinic (in the terms of Lakatos and Fleck) caused a change of agenda in other research programmes and tendencies, which had to devote themselves to explaining the Kannerian discovery by means of ad hoc hypotheses. Under the premise that behind any supposed battle between theory and experiment there is a hidden struggle between two research programmes (Argañaraz, 2014), a rational (internal) reconstruction of the autistic phenomenon was carried out. In this way, both the tendencies in dispute over the observables of empirical reality and the “autism-type” underlying each of them were identified. In addition, this reconstruction addresses various current situations and confrontations concerning the phenomenon (its designation, identification, differential diagnosis, spectralization, etc.) and shows how these stem from a dispute between current, competing tendencies which, even while overlapping with and contradicting their rivals' heuristics, fail to prevail over one another.

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Get PDF
    Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, rapid advances in science and technology have produced more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. The dimensionality of the search space is one of the main contributors to a problem's difficulty and complexity, especially when the available computational budget is limited. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems: the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each relating to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms on large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems. Finally, taking the uniformity of the initial population as a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each according to its contribution to the overall objective value of the final solution; a sketch of this allocation idea follows below. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
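    The allocation idea, spending more of a limited evaluation budget on the subproblems that contribute most to the overall objective, can be sketched in a few lines. The toy objective, local search and round structure below are hypothetical stand-ins, not the dissertation's actual framework or its 40-problem benchmark.

        # Contribution-based budget allocation across subcomponents of an
        # imbalanced separable problem: repeatedly give evaluations to the
        # component whose last optimization round improved the objective most.
        import random

        def objective(x):
            # Component weights differ by orders of magnitude, so spending
            # the budget equally across components would be wasteful.
            weights = [1000.0, 10.0, 0.1]
            return sum(w * xi ** 2 for w, xi in zip(weights, x))

        def optimize_component(x, idx, evals):
            # Toy local search that perturbs only component idx.
            best = list(x)
            for _ in range(evals):
                cand = list(best)
                cand[idx] += random.gauss(0.0, 0.1)
                if objective(cand) < objective(best):
                    best = cand
            return best

        x = [random.uniform(-5, 5) for _ in range(3)]
        contribution = [float("inf")] * 3   # optimistic init: try each component once
        for _ in range(30):                 # rounds of budget allocation
            idx = contribution.index(max(contribution))
            before = objective(x)
            x = optimize_component(x, idx, evals=50)
            contribution[idx] = before - objective(x)  # observed contribution
        print(objective(x), x)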