7 research outputs found

    Information Inference in Scholarly Communication Infrastructures: The OpenAIREplus Project Experience

    Kobos M, Bolikowski Ł, Horst M, Manghi P, Manola N, Schirrwagen J. Information Inference in Scholarly Communication Infrastructures: The OpenAIREplus Project Experience. Procedia Computer Science. 2014;38:92-99.
    The Information Inference Framework (IIF) presented in this paper provides a general-purpose suite of tools enabling the definition and execution of flexible and reliable data processing workflows whose nodes offer application-specific processing capabilities. The IIF is designed for processing big data and is implemented on top of Apache Hadoop-related technologies to cope with scalability and high-performance execution requirements. As a proof of concept, we describe how the framework is used to support linking and contextualization services in the context of the OpenAIRE infrastructure for scholarly communication.
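    As a rough illustration of the workflow-of-nodes idea described in this abstract, the sketch below models a tiny pipeline in plain Python. It is a conceptual sketch only: the real IIF runs on Apache Hadoop-related technologies, and every class and function name here is invented for illustration rather than taken from the project's API.

```python
# Illustrative sketch only: the node-based workflow idea from the abstract,
# modeled with plain Python. All names here are hypothetical, not the IIF API.
from typing import Callable, Dict, List


class WorkflowNode:
    """A workflow step wrapping an application-specific processing function."""

    def __init__(self, name: str, func: Callable[[dict], dict], deps: List[str] = ()):
        self.name = name
        self.func = func
        self.deps = list(deps)


def run_workflow(nodes: Dict[str, WorkflowNode], payload: dict) -> dict:
    """Execute nodes in dependency order, threading a shared payload through them."""
    done: List[str] = []
    while len(done) < len(nodes):
        for node in nodes.values():
            if node.name not in done and all(d in done for d in node.deps):
                payload = node.func(payload)
                done.append(node.name)
    return payload


# Toy "linking and contextualization" pipeline: parse records, then count links.
nodes = {
    "parse": WorkflowNode("parse", lambda p: {**p, "records": p["raw"].split(";")}),
    "link": WorkflowNode("link", lambda p: {**p, "links": len(p["records"])}, deps=["parse"]),
}
print(run_workflow(nodes, {"raw": "a;b;c"}))
```

    In the actual infrastructure, such nodes would run as Hadoop jobs over large record collections rather than as in-process Python functions.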

    Combination of Independent Kernel Density Estimators in Classification

    No full text
    A new classification algorithm based on combination of two independent kernel density estimators per class is proposed. Each estimator is characterized by a different bandwidth parameter. Combination of the estimators corresponds to viewing the data with different “resolutions”. The intuition behind the method is that combining different views on the data yields a better insight into the data structure; therefore, it leads to a better classification result. The bandwidth parameters are adjusted automatically by the L-BFGS-B algorithm to minimize the cross-validation classification error. Results of experiments on benchmark data sets confirm the algorithm’s applicability.
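    A minimal sketch of the described idea, assuming Gaussian kernels, a simple averaging combination rule, and scikit-learn's KernelDensity; beyond the general scheme of two per-class estimators with different bandwidths, these choices are assumptions, not details from the paper:

```python
# Hedged sketch: per class, combine two kernel density estimators with
# different bandwidths and classify by the highest combined density.
# The averaging rule and all names are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity


def fit_class_models(X, y, bw_narrow=0.2, bw_wide=1.0):
    """Fit two KDEs (a 'fine' and a 'coarse' view) for each class."""
    models = {}
    for label in np.unique(y):
        Xc = X[y == label]
        models[label] = (
            KernelDensity(bandwidth=bw_narrow).fit(Xc),
            KernelDensity(bandwidth=bw_wide).fit(Xc),
        )
    return models


def predict(models, X):
    """Assign each sample to the class with the highest combined density."""
    labels = list(models)
    scores = np.stack(
        [
            0.5 * (np.exp(kde1.score_samples(X)) + np.exp(kde2.score_samples(X)))
            for kde1, kde2 in (models[l] for l in labels)
        ],
        axis=1,
    )
    return np.array(labels)[scores.argmax(axis=1)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    models = fit_class_models(X, y)
    print((predict(models, X) == y).mean())  # training accuracy of the toy model
```

    In the paper the two bandwidths are not fixed by hand as above but adjusted automatically with L-BFGS-B (e.g., via scipy.optimize.minimize with method="L-BFGS-B") to minimize the cross-validation classification error.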

    OpenAIRE2020 D6.1 - OpenAIRE Specification and Release Plan

    No full text
    Manghi P, Bardi A, Atzori C, et al. OpenAIRE2020 D6.1 - OpenAIRE Specification and Release Plan; 2015.