
    WebPicker: Knowledge Extraction from Web Resources

    We show how information distributed across several web resources and represented in different restricted languages can be extracted from its original sources and transformed into a common knowledge model represented in XML using WebPicker. This information, built to cover different needs and functionalities, can later be imported into WebODE, then integrated, enriched, and exported into different representation formats using WebODE-specific modules. We present a case study in the e-commerce domain, using products-and-services standards from several organizations and joint initiatives of industrial and services companies, together with a product catalogue from an e-commerce platform.
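    The transformation described above can be illustrated with a minimal sketch using Python's standard library (the element names, the category code, and the catalogue record are illustrative assumptions, not WebPicker's actual schema): two heterogeneous source records are merged into one shared XML concept model.

    ```python
    import xml.etree.ElementTree as ET

    # Two hypothetical source records in different restricted formats:
    # a standards-body category entry and a catalogue entry from an
    # e-commerce platform, both referring to the same product class.
    standard_entry = {"code": "43211503", "title": "Notebook computers"}
    catalogue_entry = {"sku": "NB-100", "name": "UltraLight 13 laptop",
                       "category": "43211503"}

    def to_common_model(standard, product):
        """Merge both sources into one shared XML concept hierarchy."""
        root = ET.Element("ontology")
        concept = ET.SubElement(root, "concept", code=standard["code"])
        ET.SubElement(concept, "label").text = standard["title"]
        instance = ET.SubElement(concept, "instance", id=product["sku"])
        ET.SubElement(instance, "name").text = product["name"]
        return root

    tree = to_common_model(standard_entry, catalogue_entry)
    print(ET.tostring(tree, encoding="unicode"))
    ```

    The resulting XML places the catalogue instance under the concept defined by the standard, which is the kind of common model a downstream tool could then import and enrich.
    
    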

    Reverse engineering in construction

    A great deal of research into construction IT has recently been completed, and work is ongoing to improve efficiency and quality in the construction sector. The innovation of 3D laser scanning is aimed at improving the efficiency and quality of construction projects, such as the maintenance of buildings or groups of buildings that are to be renovated for new services. The 3D laser scanner will be integrated with other VR tools, such as GIS solutions and a workbench for visualisation, analysis, and interaction with a building VR model. An integration strategy is proposed for an Ordnance Survey map of the area and the 3D model created by means of the laser scanner. The integrated model will then be transferred to the VR workbench in order to visualise, interact with, and analyse the buildings of interest.

    Building trainable taggers in a web-based, UIMA-supported NLP workbench

    Argo is a web-based NLP and text mining workbench with a convenient graphical user interface for designing and executing processing workflows of varying complexity. The workbench is intended for specialists and non-technical audiences alike, and provides an ever-expanding library of analytics compliant with the Unstructured Information Management Architecture (UIMA), a widely adopted interoperability framework. We explore the flexibility of this framework by demonstrating workflows involving three processing components capable of performing self-contained machine learning-based tagging. The three components are responsible for three distinct tasks: 1) generating observations or features, 2) training a statistical model based on the generated features, and 3) tagging unlabelled data with the model. The learning and tagging components are based on an implementation of conditional random fields (CRF), whereas the feature generation component is an analytic capable of extending basic token information to a comprehensive set of features. Users define the features of their choice directly from Argo's graphical interface, without resorting to programming (a commonly used approach to feature engineering). Experimental results on two tagging tasks, chunking and named entity recognition, showed that a tagger with a generic set of features built in Argo is capable of competing with task-specific solutions.
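    The feature-generation step described in this abstract can be sketched in a few lines (the feature names and the one-token context window are illustrative choices, not Argo's actual configuration): each basic token is expanded into a dictionary of observations of the kind a CRF learner consumes.

    ```python
    def token_features(tokens, i):
        """Expand the basic token at position i into a feature dictionary,
        using surface form, shape, suffix, and a one-token context window."""
        tok = tokens[i]
        feats = {
            "word.lower": tok.lower(),
            "word.istitle": tok.istitle(),
            "word.isdigit": tok.isdigit(),
            "suffix3": tok[-3:],
        }
        # Context features from the neighbouring tokens, if any.
        if i > 0:
            feats["prev.lower"] = tokens[i - 1].lower()
        else:
            feats["BOS"] = True  # beginning of sentence
        if i < len(tokens) - 1:
            feats["next.lower"] = tokens[i + 1].lower()
        else:
            feats["EOS"] = True  # end of sentence
        return feats

    sentence = ["Argo", "tags", "proteins", "."]
    features = [token_features(sentence, i) for i in range(len(sentence))]
    ```

    A CRF trainer would then pair each such feature dictionary with its gold label; the point of the graphical approach in the paper is that users assemble these feature definitions without writing code like this by hand.
    
    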

    Modelling the influence of RKIP on the ERK signalling pathway using the stochastic process algebra PEPA

    This paper examines the influence of the Raf Kinase Inhibitor Protein (RKIP) on the Extracellular signal Regulated Kinase (ERK) signalling pathway [5] through modelling in a Markovian process algebra, PEPA [11]. Two models of the system are presented: a reagent-centric view and a pathway-centric view. The models capture functionality at the level of subpathways, rather than at the molecular level, and each affords a different perspective on the pathway and its analysis. We demonstrate that the two models are formally equivalent using the timing-aware bisimulation defined over PEPA models, and we discuss the biological significance.

    A Domain-Specific Language and Editor for Parallel Particle Methods

    Domain-specific languages (DSLs) are of increasing importance in scientific high-performance computing to reduce development costs, raise the level of abstraction and, thus, ease scientific programming. However, designing and implementing DSLs is not an easy task, as it requires knowledge of the application domain and experience in language engineering and compilers. Consequently, many DSLs follow a weak approach using macros or text generators, which lack many of the features that make a DSL comfortable for programmers. Some of these features (e.g., syntax highlighting, type inference, error reporting, and code completion) are easily provided by language workbenches, which combine language engineering techniques and tools in a common ecosystem. In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL and development environment for numerical simulations based on particle methods and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS), a projectional language workbench. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional implementation strategies. We analyze and compare both languages and demonstrate how the programmer's experience can be improved using static analyses and projectional editing. Furthermore, we present an explicit domain model for particle abstractions and the first formal type system for particle methods.
    Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
    • ā€¦
    corecore