182 research outputs found

    Towards a flexible open-source software library for multi-layered scholarly textual studies: An Arabic case study dealing with semi-automatic language processing

    This paper presents both the general model and a case study of the Computational and Collaborative Philology Library (CoPhiLib), an ongoing initiative at the Institute for Computational Linguistics (ILC) of the National Research Council (CNR), Pisa, Italy. The library, designed and organized as a reusable, abstract, open-source software component, aims to meet the needs of multilingual and cross-lingual analysis by exposing common Application Programming Interfaces (APIs). The core modules, written in Java, form the groundwork of a Web platform designed to address the needs of textual scholarship. The Web application, implemented according to the Java Enterprise specifications, focuses on multi-layered analysis for the study of literary documents and related multimedia sources. The ambition is to manage textual resources while abstracting away from the particular language under study and decoupling from the specific requirements of individual projects. This goal is achieved through agile-process methodologies, suitable use-case modeling, design patterns, and component-based architectures. The reusability and flexibility of the system have been tested on an Arabic case study: the system allows users to choose the morphological engine (such as AraMorph or Al-Khalil) along with the linguistic granularity (i.e., with or without declension). Finally, the application enables the construction of annotated resources (training sets) for further statistical engines. © 2014 IEEE
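
    The pluggable-engine design described above (choosing AraMorph or Al-Khalil at run time, with or without declension) can be illustrated with a small interface sketch. CoPhiLib itself is written in Java and its actual API is not reproduced here; the Python below is a hypothetical, minimal sketch of such an abstraction, with invented class and method names:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class MorphAnalysis:
    """One candidate analysis of a surface token."""
    lemma: str
    pos: str
    features: dict  # e.g. {"case": "nominative"} when declension is requested


class MorphEngine(ABC):
    """Common interface that concrete engines (e.g. AraMorph, Al-Khalil) would implement."""

    @abstractmethod
    def analyze(self, token: str, with_declension: bool = True) -> List[MorphAnalysis]:
        ...


class DummyEngine(MorphEngine):
    """Stand-in engine used only to show how a caller stays engine-agnostic."""

    def analyze(self, token: str, with_declension: bool = True) -> List[MorphAnalysis]:
        features = {"case": "unknown"} if with_declension else {}
        return [MorphAnalysis(lemma=token, pos="NOUN", features=features)]


def annotate(tokens: List[str], engine: MorphEngine, with_declension: bool) -> list:
    """Build a simple annotated resource: one (token, analyses) pair per token."""
    return [(t, engine.analyze(t, with_declension)) for t in tokens]


if __name__ == "__main__":
    # Swapping DummyEngine for another MorphEngine leaves this caller unchanged.
    layer = annotate(["كتاب", "قرأ"], DummyEngine(), with_declension=False)
    for token, analyses in layer:
        print(token, [a.lemma for a in analyses])
```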

    Towards a public analysis database for LHC new physics searches using MadAnalysis 5

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.
    Comment: 20 pages, 15 figures, 5 recast codes; version accepted by EPJC (Dec 22, 2014), including a new section with guidelines for the experimental collaborations as well as for potential contributors to the PAD; complementary information can be found at http://madanalysis.irmp.ucl.ac.be/wiki/PhysicsAnalysisDatabas
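
    A recast code in such a database essentially re-implements the event selection of an ATLAS or CMS search and tallies events per signal region. The sketch below is not the MadAnalysis 5 API; it is a framework-independent Python illustration of that counting pattern, with invented event fields, cuts, and signal-region names:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Event:
    """Toy event record; a real recast code reads reconstructed objects from generator output."""
    met: float            # missing transverse energy [GeV]
    n_jets: int
    leading_jet_pt: float


@dataclass
class SignalRegion:
    name: str
    selection: Callable[[Event], bool]
    count: int = 0


def run_analysis(events: List[Event], regions: List[SignalRegion]) -> Dict[str, int]:
    """Apply each signal-region selection to every event and tally the counts."""
    for ev in events:
        for sr in regions:
            if sr.selection(ev):
                sr.count += 1
    return {sr.name: sr.count for sr in regions}


if __name__ == "__main__":
    # Hypothetical signal regions loosely inspired by a jets + MET supersymmetry search.
    regions = [
        SignalRegion("SR_loose", lambda e: e.met > 200 and e.n_jets >= 2),
        SignalRegion("SR_tight", lambda e: e.met > 400 and e.n_jets >= 4 and e.leading_jet_pt > 150),
    ]
    toy_events = [Event(met=450.0, n_jets=5, leading_jet_pt=220.0),
                  Event(met=120.0, n_jets=2, leading_jet_pt=80.0)]
    print(run_analysis(toy_events, regions))
```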

    DFKI finite-state machine toolkit

    Finite-state devices such as finite-state automata and finite-state transducers have been known since the emergence of computer science and have recently been used extensively in many areas of language technology. The use of finite-state devices is mainly motivated by their time and space efficiency. In this paper we present the Finite-State Machine Toolkit for building, combining, and optimizing finite-state machines, developed at the Language Technology Lab of the German Research Center for Artificial Intelligence (DFKI).
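
    The three operations the toolkit names, building, combining, and optimizing finite-state machines, can be illustrated with a minimal sketch. The Python below is not the DFKI toolkit's API; it shows a toy nondeterministic automaton with epsilon transitions, a union construction for combining two machines, and acceptance via on-the-fly subset simulation:

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

EPS = ""  # label used for epsilon transitions


@dataclass
class NFA:
    """A small nondeterministic finite automaton over string labels."""
    start: int
    accept: Set[int]
    # transitions[(state, label)] -> set of target states
    transitions: Dict[Tuple[int, str], Set[int]] = field(default_factory=dict)

    def add(self, src: int, label: str, dst: int) -> None:
        self.transitions.setdefault((src, label), set()).add(dst)

    def _closure(self, states: Set[int]) -> Set[int]:
        """Epsilon-closure: all states reachable via epsilon transitions."""
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in self.transitions.get((s, EPS), ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def accepts(self, word: list) -> bool:
        """Simulate the NFA by tracking the reachable state set (subset construction on the fly)."""
        current = self._closure({self.start})
        for symbol in word:
            nxt: Set[int] = set()
            for s in current:
                nxt |= self.transitions.get((s, symbol), set())
            current = self._closure(nxt)
        return bool(current & self.accept)


def union(a: NFA, b: NFA, offset: int = 1000) -> NFA:
    """Combine two automata: a new start state branches into both via epsilon moves.

    Assumes the state ids of `a` stay below `offset` after shifting by one.
    """
    combined = NFA(start=0, accept=set())
    combined.add(0, EPS, a.start + 1)
    combined.add(0, EPS, b.start + offset)
    for (src, label), dsts in a.transitions.items():
        for d in dsts:
            combined.add(src + 1, label, d + 1)
    for (src, label), dsts in b.transitions.items():
        for d in dsts:
            combined.add(src + offset, label, d + offset)
    combined.accept = {s + 1 for s in a.accept} | {s + offset for s in b.accept}
    return combined


if __name__ == "__main__":
    # Two toy automata recognising the single words "cat" and "dog".
    cat = NFA(start=0, accept={3})
    for i, ch in enumerate("cat"):
        cat.add(i, ch, i + 1)
    dog = NFA(start=0, accept={3})
    for i, ch in enumerate("dog"):
        dog.add(i, ch, i + 1)
    either = union(cat, dog)
    print(either.accepts(list("cat")), either.accepts(list("dog")), either.accepts(list("cow")))
```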

    Integrated Framework for Interaction and Annotation of Multimodal Data

    Ahmed, Afroza. MS. The University of Memphis. August 2010. Integrated Framework for Interaction and Annotation of Multimodal Data. Major Professor: Mohammed Yeasin, Ph.D. This thesis aims to develop an integrated framework and intuitive user interface to interact with, annotate, and analyze multimodal data (i.e., video, image, audio, and text). The proposed framework has three layers: (i) interaction, (ii) annotation, and (iii) analysis or modeling. These three layers are seamlessly wrapped together by a user-friendly interface designed according to proven principles from industry practice. The key objective is to facilitate interaction with multimodal data at various levels of granularity. In particular, the proposed framework allows interaction with the multimodal data at three levels: (i) the raw level, (ii) the feature level, and (iii) the semantic level. The main function of the proposed framework is to provide an efficient way to annotate the raw multimodal data to create proper ground-truth metadata. The annotated data are used for visual analysis, co-analysis, and modeling of underlying concepts such as dialog acts, continuous gestures, and spontaneous emotions. The key challenge is to integrate code (computer programs) written in different programming languages and on different platforms, and to display the results and the multimodal data in one platform. This fully integrated tool achieved the stated goals and objectives and is a valuable addition to the small set of existing tools for interaction, annotation, and analysis of multimodal data.
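
    A core piece of such a framework is the ground-truth metadata itself: time-aligned labels on named tiers that can be queried across modalities. The sketch below is not the thesis's implementation; it is a minimal Python illustration, with invented tier names, of how annotations over synchronized video, audio, and text streams might be stored and co-queried:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Annotation:
    """One time-aligned label on a named tier (e.g. 'gesture', 'dialog_act', 'emotion')."""
    tier: str
    start: float   # seconds from the beginning of the recording
    end: float
    label: str


class AnnotationStore:
    """Minimal store for ground-truth metadata over synchronized multimodal streams."""

    def __init__(self) -> None:
        self._annotations: List[Annotation] = []

    def add(self, tier: str, start: float, end: float, label: str) -> None:
        self._annotations.append(Annotation(tier, start, end, label))

    def overlapping(self, start: float, end: float) -> List[Annotation]:
        """All annotations, on any tier, that overlap the given time window (for co-analysis)."""
        return [a for a in self._annotations if a.start < end and a.end > start]


if __name__ == "__main__":
    store = AnnotationStore()
    store.add("dialog_act", 0.0, 1.8, "question")
    store.add("gesture", 0.5, 1.2, "head_nod")
    store.add("emotion", 1.0, 3.0, "surprise")
    # Which modalities carry labels while the question is being asked?
    for ann in store.overlapping(0.0, 1.8):
        print(ann.tier, ann.label)
```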

    Improving Usability And Scalability Of Big Data Workflows In The Cloud

    Big data workflows have recently emerged as the next generation of data-centric workflow technologies to address the five “V” challenges of big data: volume, variety, velocity, veracity, and value. More formally, a big data workflow is the computerized modeling and automation of a process consisting of a set of computational tasks and their data interdependencies, used to process and analyze data of ever-increasing scale, complexity, and rate of acquisition. The convergence of big data and workflows creates new challenges for the workflow community. First, the variety of big data creates a need to integrate a large number of remote Web services and other heterogeneous task components, which consume and produce data in various formats and models, into a uniform and interoperable workflow. Existing approaches address the so-called shimming problem only in an ad hoc manner and cannot provide a generic solution; we automatically insert pieces of code, called shims or adaptors, to resolve data type mismatches. Second, the volume of big data produces a large number of datasets that need to be queried and analyzed in an effective and personalized manner, and there is also a strong need for sharing, reusing, and repurposing existing tasks and workflows across different users and institutes. To overcome these limitations, we propose a folksonomy-based social workflow recommendation system that improves workflow design productivity and makes dataset querying and analysis more efficient. Third, the volume of big data requires processing and analyzing data of ever-increasing scale, complexity, and rate of acquisition, yet a scalable distributed data model that abstracts and automates data distribution, parallelism, and scalable processing is still missing; we propose a NoSQL collectional data model that addresses this limitation. Finally, the volume of big data, combined with the unbounded resource leasing capability foreseen in the cloud, enables data scientists to wring actionable insights from the data in a time- and cost-efficient manner; we propose the BARENTS scheduler, which supports high-performance workflow scheduling in a heterogeneous cloud-computing environment with a single objective: to minimize the workflow makespan under a user-provided budget constraint.
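
    The shim insertion described for the first challenge can be illustrated with a small sketch. The Python below is not the authors' system; it assumes a hypothetical registry of adaptors keyed by (source format, target format) and inserts one into a pipeline whenever the producer and consumer formats disagree:

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical registry of known adaptors ("shims"), keyed by (source format, target format).
SHIMS: Dict[Tuple[str, str], Callable[[object], object]] = {
    ("csv_row", "json_record"): lambda row: dict(zip(["id", "value"], row.split(","))),
    ("json_record", "csv_row"): lambda rec: ",".join(str(rec[k]) for k in ("id", "value")),
}


def connect(producer_format: str, consumer_format: str,
            pipeline: List[Callable[[object], object]]) -> None:
    """Wire two workflow tasks together, inserting a shim when their data formats disagree."""
    if producer_format == consumer_format:
        return  # formats match, no adaptor needed
    shim = SHIMS.get((producer_format, consumer_format))
    if shim is None:
        raise TypeError(f"no shim registered for {producer_format} -> {consumer_format}")
    pipeline.append(shim)


if __name__ == "__main__":
    pipeline: List[Callable[[object], object]] = []
    # A task that emits CSV rows feeds a task that expects JSON-like records.
    connect("csv_row", "json_record", pipeline)
    data = "42,3.14"
    for stage in pipeline:
        data = stage(data)
    print(data)  # {'id': '42', 'value': '3.14'}
```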

    ColliderBit: a GAMBIT module for the calculation of high-energy collider observables and likelihoods

    We describe ColliderBit, a new code for the calculation of high-energy collider observables in theories of physics beyond the Standard Model (BSM). ColliderBit features a generic interface to BSM models, a unique parallelised Monte Carlo event generation scheme suitable for large-scale supercomputer applications, and a number of LHC analyses covering a reasonable range of the BSM signatures currently sought by ATLAS and CMS. ColliderBit also calculates likelihoods for Higgs sector observables and for LEP searches for BSM particles. These features are provided by a combination of new code unique to ColliderBit and interfaces to existing state-of-the-art public codes. ColliderBit is both an important part of the GAMBIT framework for BSM inference and a standalone tool for efficiently applying collider constraints to theories of new physics.
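
    For a single counting signal region, the likelihood of a BSM parameter point is typically based on the Poisson probability of the observed event count given the predicted signal plus background. The sketch below is not ColliderBit's implementation (which combines many regions and treats systematic uncertainties); it is a minimal Python illustration of that basic building block, with invented numbers:

```python
import math


def poisson_log_likelihood(observed: int, signal: float, background: float) -> float:
    """Log of the Poisson probability of seeing `observed` events given s + b expected."""
    expected = signal + background
    return observed * math.log(expected) - expected - math.lgamma(observed + 1)


def delta_log_likelihood(observed: int, signal: float, background: float) -> float:
    """Log-likelihood ratio of the signal+background hypothesis to the background-only one."""
    return (poisson_log_likelihood(observed, signal, background)
            - poisson_log_likelihood(observed, 0.0, background))


if __name__ == "__main__":
    # Hypothetical numbers: 12 events observed, 10.5 expected from background,
    # 4.0 predicted by the BSM parameter point under test.
    print(delta_log_likelihood(observed=12, signal=4.0, background=10.5))
```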
