12,877 research outputs found

    An extensible manufacturing resource model for process integration

    Driven by industrial needs and enabled by process technology and information technology, enterprise integration is rapidly shifting from information integration to process integration in order to improve the overall performance of enterprises. Traditional resource models are built around the needs of individual applications, so they cannot effectively serve process integration, which requires resources to be represented in a unified, comprehensive and flexible way to meet the needs of the various applications involved in different business processes. This paper addresses this issue and presents a configurable and extensible resource model that can be rapidly reconfigured and extended to serve different applications. To achieve generality, the resource model is defined at both the macro level and the micro level, and a semantic representation method is developed to improve the model's flexibility and extensibility.
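    The configurable/extensible idea can be pictured with a small sketch in which resource attribute definitions are plain data rather than hard-coded fields, so the model can be reconfigured or extended per application. All class and attribute names below are assumptions made for illustration, not the model defined in the paper.

```python
# Illustrative sketch of a configurable, extensible resource model.
# Attribute definitions are plain data, so new resource kinds and new
# attributes can be registered at run time rather than hard-coded per application.

from dataclasses import dataclass, field


@dataclass
class AttributeDef:
    name: str          # semantic name of the attribute, e.g. "spindle_power"
    datatype: type     # expected Python type of the value
    unit: str = ""     # optional unit of measure, e.g. "kW"


@dataclass
class ResourceClass:
    """Macro-level description: what a kind of resource looks like."""
    name: str
    attributes: dict = field(default_factory=dict)  # attribute name -> AttributeDef

    def extend(self, attr: AttributeDef) -> None:
        # Extending the model is just registering another attribute definition.
        self.attributes[attr.name] = attr


@dataclass
class ResourceInstance:
    """Micro-level description: one concrete resource on the shop floor."""
    resource_class: ResourceClass
    values: dict = field(default_factory=dict)

    def set(self, name: str, value) -> None:
        attr = self.resource_class.attributes[name]
        if not isinstance(value, attr.datatype):
            raise TypeError(f"{name} expects {attr.datatype.__name__}")
        self.values[name] = value


# Usage: a milling-machine class is configured, then extended for a new application.
machine = ResourceClass("milling_machine")
machine.extend(AttributeDef("spindle_power", float, "kW"))
machine.extend(AttributeDef("axis_count", int))

m01 = ResourceInstance(machine)
m01.set("spindle_power", 7.5)
m01.set("axis_count", 5)
```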

    Apache Calcite: A Foundational Framework for Optimized Query Processing Over Heterogeneous Data Sources

    Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of handling a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for new types of data sources, query languages, and approaches to query processing and optimization. Comment: SIGMOD'1
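    The core architectural idea, a query plan represented as a tree of relational operators that independent rewrite rules transform, can be sketched in a few lines. The sketch below is a conceptual illustration in Python, not Calcite's actual Java API, and it applies rules in a single bottom-up pass rather than iterating to a fixed point or using cost-based planning as a real optimizer would.

```python
# Conceptual sketch of rule-based plan rewriting in the spirit of an
# extensible optimizer: a query plan is a tree of relational operators and
# each rule is an independent function that rewrites a matching subtree.

from dataclasses import dataclass


@dataclass
class Scan:
    table: str


@dataclass
class Filter:
    condition: str
    child: object


@dataclass
class Project:
    columns: list
    child: object


def push_filter_below_project(node):
    """Rewrite Filter(Project(x)) into Project(Filter(x)).

    A real optimizer would first check that the filter condition only uses
    columns preserved by the projection; that check is omitted here.
    """
    if isinstance(node, Filter) and isinstance(node.child, Project):
        project = node.child
        return Project(project.columns, Filter(node.condition, project.child))
    return node


def optimize(node, rules):
    # Bottom-up single pass: rewrite the child first, then try every rule here.
    if isinstance(node, Filter):
        node = Filter(node.condition, optimize(node.child, rules))
    elif isinstance(node, Project):
        node = Project(node.columns, optimize(node.child, rules))
    for rule in rules:
        node = rule(node)
    return node


plan = Filter("price > 10", Project(["name", "price"], Scan("orders")))
print(optimize(plan, [push_filter_below_project]))
# Project(columns=['name', 'price'], child=Filter(condition='price > 10', child=Scan(table='orders')))
```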

    An Open Framework for Integrating Widely Distributed Hypermedia Resources

    The success of the WWW has illustrated how hypermedia functionality can enhance access to large amounts of distributed information. However, the WWW and many other distributed hypermedia systems offer only very simple forms of hypermedia functionality, which are not easily applied to existing applications and data formats, and cannot easily incorporate alternative functions that would aid hypermedia navigation to and from existing documents not developed with hypermedia access in mind. This paper describes the extension of the Microcosm system's open hypermedia functionality to a distributed environment; the system is designed to support hypermedia access to a wide range of source material and applications, and to be straightforwardly extended to incorporate new forms of information access.
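    The open-hypermedia principle of keeping links outside the documents can be pictured with a toy link service: links live in an external linkbase, and any viewer can ask what a given text selection links to, even for documents never authored for hypermedia. The names below are invented for illustration and do not reflect Microcosm's actual design.

```python
# Illustrative sketch of an open-hypermedia link service: links are stored in
# an external linkbase rather than inside documents, so hypermedia access can
# be layered over existing material without modifying it.

from dataclasses import dataclass


@dataclass(frozen=True)
class Link:
    source_selection: str   # text the user selected, acting as a generic anchor
    target_document: str    # where the link leads
    description: str = ""


class Linkbase:
    def __init__(self):
        self._links = []

    def add(self, link: Link) -> None:
        self._links.append(link)

    def resolve(self, selection: str):
        """Return all links whose anchor matches the selected text."""
        return [l for l in self._links
                if l.source_selection.lower() == selection.lower()]


# Usage: the linkbase is populated independently of the documents it covers.
lb = Linkbase()
lb.add(Link("query optimizer", "docs/background-reading.txt", "overview material"))
print(lb.resolve("Query Optimizer"))
```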

    An extensible product structure model for product lifecycle management in the make-to-order environment

    This paper presents an extensible product structure model, based on a semantic representation technique, for developing product lifecycle management (PLM) systems flexible enough for the make-to-order environment. In the make-to-order business context, each product can have a number of variants with slightly different constitutions to fulfill different customer requirements. All the variants of a family share common characteristics, and each variant has its own specific features. A master-variant pattern is proposed for building the product structure model so that the common characteristics and the specific features of individual variants are represented explicitly. The model is capable of enforcing consistency between a family structure and its variant structures, supporting multiple product views, and facilitating the associated business processes. A semantic representation technique is developed that enables entity attributes to be defined and entities to be categorized in a neutral, semantic format. As a result, entity attributes and entity categorization can easily be redefined to meet the different requirements of PLM systems. An XML-based language is developed for semantically representing entities and entity categories, and a proof-of-concept prototype is presented to illustrate the capability of the proposed extensible product structure model.
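    A master-variant pattern of the kind the abstract describes can be sketched as a master structure holding the components common to the family, with each variant recording only its own additions and removals. The class and component names below are assumptions for illustration, not the paper's XML-based representation.

```python
# Illustrative sketch of a master-variant product structure: the master holds
# the components common to the whole family, and each variant records only
# its deltas (added and removed components).

from dataclasses import dataclass, field


@dataclass
class MasterStructure:
    family: str
    common_components: set = field(default_factory=set)


@dataclass
class VariantStructure:
    master: MasterStructure
    name: str
    added: set = field(default_factory=set)
    removed: set = field(default_factory=set)

    def effective_components(self) -> set:
        """Resolve the full component set for this variant."""
        return (self.master.common_components - self.removed) | self.added


# Usage: a variant of the same family shares the common structure but swaps
# one order-specific component.
pump = MasterStructure("pump_family", {"casing", "impeller", "shaft", "standard_seal"})
high_temp = VariantStructure(pump, "high_temp",
                             added={"ceramic_seal"}, removed={"standard_seal"})
print(sorted(high_temp.effective_components()))
```

    Because a variant stores only deltas against the master, a change to the family's common structure automatically propagates to every variant, which is one simple way of keeping family and variant structures consistent.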

    A flexible architecture for privacy-aware trust management

    In service-oriented systems, a constellation of services cooperates, sharing potentially sensitive information and responsibilities. Cooperation is only possible if the different participants trust each other. Because trust may depend on many different factors, a flexible Trust Management (TM) framework must compute trust by combining different types of information. In this paper we describe the TAS3 TM framework, which integrates independent TM systems into a single trust decision point. The framework supports intricate combinations whilst remaining easily extensible, and it provides a unified trust evaluation interface to the (authorization framework of the) services. We demonstrate the flexibility of the approach by integrating three distinct TM paradigms: reputation-based TM, credential-based TM, and Key Performance Indicator (KPI) based TM. Finally, we discuss privacy concerns in TM systems and the directions to be taken towards a privacy-friendly TM architecture.
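    The idea of a single trust decision point sitting in front of several independent TM engines can be sketched as follows. The three evaluators and the combination policy are hedged illustrations of the general pattern, not the TAS3 implementation.

```python
# Conceptual sketch of a single trust decision point that combines independent
# trust management (TM) engines behind one interface.

def reputation_tm(subject: str) -> float:
    # e.g. an aggregated feedback score in [0, 1]; hard-coded here for the sketch
    return {"service_a": 0.9, "service_b": 0.4}.get(subject, 0.5)


def credential_tm(subject: str) -> bool:
    # e.g. whether the subject presented an acceptable credential chain
    return subject in {"service_a"}


def kpi_tm(subject: str) -> float:
    # e.g. fraction of service-level objectives met recently
    return {"service_a": 0.95, "service_b": 0.7}.get(subject, 0.0)


def trust_decision(subject: str) -> bool:
    """Unified decision point: credentials are mandatory, and the averaged
    reputation/KPI score must clear a threshold. Other combination policies
    (weighted sums, per-application thresholds) could be plugged in here."""
    if not credential_tm(subject):
        return False
    score = (reputation_tm(subject) + kpi_tm(subject)) / 2
    return score >= 0.6


print(trust_decision("service_a"))  # True
print(trust_decision("service_b"))  # False: no acceptable credential
```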

    Ellogon: A New Text Engineering Platform

    This paper presents Ellogon, a multi-lingual, cross-platform, general-purpose text engineering environment. Ellogon was designed to aid both researchers in natural language processing and companies that produce language engineering systems for the end user. Ellogon provides a powerful TIPSTER-based infrastructure for managing, storing and exchanging textual data, for embedding and managing text processing components, and for visualising textual data and their associated linguistic information. Among its key features are full Unicode support, an extensive multi-lingual graphical user interface, a modular architecture and modest hardware requirements. Comment: 7 pages, 9 figures. To be presented at the Third International Conference on Language Resources and Evaluation - LREC 200
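    A TIPSTER-style infrastructure stores the text once and keeps linguistic information as stand-off annotations that reference the text by character offsets, so independent components can add layers without rewriting the document. The sketch below illustrates that idea only; the names are not Ellogon's actual API.

```python
# Illustrative sketch of TIPSTER-style stand-off annotation: the text is stored
# once, and each processing component adds annotations that point into it.

from dataclasses import dataclass, field


@dataclass
class Annotation:
    atype: str               # e.g. "token", "sentence", "person"
    start: int               # character offset where the span begins
    end: int                 # character offset one past the span's end
    attributes: dict = field(default_factory=dict)


@dataclass
class Document:
    text: str
    annotations: list = field(default_factory=list)

    def annotate(self, atype, start, end, **attrs):
        self.annotations.append(Annotation(atype, start, end, attrs))

    def spans(self, atype):
        """Return the annotated substrings of a given type."""
        return [self.text[a.start:a.end] for a in self.annotations if a.atype == atype]


# Usage: a hypothetical tokeniser component adds token annotations.
doc = Document("Ellogon stores text and annotations separately.")
doc.annotate("token", 0, 7, lemma="Ellogon")
doc.annotate("token", 8, 14)
print(doc.spans("token"))   # ['Ellogon', 'stores']
```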

    A storage and access architecture for efficient query processing in spatial database systems

    Due to the high complexity of objects and queries, and to extremely large data volumes, geographic database systems impose stringent requirements on their storage and access architecture with respect to efficient query processing. Performance-improving concepts such as spatial storage and access structures, approximations, object decompositions and multi-phase query processing have been suggested and analyzed as single building blocks. In this paper, we describe a storage and access architecture composed of these building blocks in a modular fashion. Additionally, we incorporate into the architecture a new ingredient, the scene organization, for efficiently supporting set-oriented access in large-area region queries. An experimental performance comparison demonstrates that the concept of scene organization improves the performance of large-area region queries by a factor of up to 150.
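    Multi-phase query processing with approximations typically means a cheap filter step on bounding boxes followed by an exact refinement step on the surviving candidates only. The sketch below illustrates that general pattern; it is not the paper's storage and access architecture, and the "exact" geometry test is deliberately simplified.

```python
# Conceptual sketch of filter-and-refine spatial query processing: phase 1
# compares cheap bounding-box approximations, phase 2 runs the expensive
# exact geometry test on the candidates that survive the filter.

from dataclasses import dataclass


@dataclass
class BoundingBox:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def intersects(self, other: "BoundingBox") -> bool:
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)


@dataclass
class SpatialObject:
    oid: int
    bbox: BoundingBox
    polygon: list  # exact geometry as a list of (x, y) vertices


def region_query(objects, query_box, exact_test):
    # Phase 1 (filter): in a real system this would be served by a spatial
    # index such as an R-tree rather than a linear scan.
    candidates = [o for o in objects if o.bbox.intersects(query_box)]
    # Phase 2 (refinement): exact test on the candidates only.
    return [o for o in candidates if exact_test(o.polygon, query_box)]


# Usage with a trivial "exact" test: does any vertex lie inside the query box?
def vertex_in_box(polygon, box):
    return any(box.xmin <= x <= box.xmax and box.ymin <= y <= box.ymax
               for x, y in polygon)


objs = [SpatialObject(1, BoundingBox(0, 0, 2, 2), [(0, 0), (2, 0), (1, 2)]),
        SpatialObject(2, BoundingBox(5, 5, 9, 9), [(5, 5), (9, 5), (7, 9)])]
print([o.oid for o in region_query(objs, BoundingBox(1, 1, 3, 3), vertex_in_box)])  # [1]
```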

    Ontology-based knowledge representation of experiment metadata in biological data mining

    According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential to assist investigators in the extraction, management and analysis of these data, the information contained in traditional journal publications still consists largely of unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to access the content being conveyed without significant manual intervention. To circumvent these roadblocks and make the most of the output of the biomedical research enterprise, a variety of related standards for knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchies, and extensible relational data models that link these information components together in a machine-readable and human-usable framework for data mining.
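    The subsumption hierarchies mentioned above are what let a query over experiment metadata posed with a general term also match records annotated with more specific terms. The tiny vocabulary in the sketch below is invented for illustration and is not drawn from any of the standards discussed in the chapter.

```python
# Minimal sketch of a subsumption ("is-a") hierarchy: each term records its
# parents, and a query for a general term matches all of its descendants.

IS_A = {
    "flow_cytometry_assay": ["cell_assay"],
    "cell_assay": ["assay"],
    "elisa": ["immunoassay"],
    "immunoassay": ["assay"],
}


def subsumed_by(term: str, ancestor: str) -> bool:
    """True if `term` is the same as, or a descendant of, `ancestor`."""
    if term == ancestor:
        return True
    return any(subsumed_by(parent, ancestor) for parent in IS_A.get(term, []))


# A query for all "assay" records also matches the more specific annotations.
records = [("exp1", "flow_cytometry_assay"), ("exp2", "elisa"), ("exp3", "microscopy")]
print([rid for rid, term in records if subsumed_by(term, "assay")])  # ['exp1', 'exp2']
```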

    Managing complexity in a distributed digital library

    As the capabilities of distributed digital libraries increase, managing organizational and software complexity becomes a key issue. How can collections and indexes be updated without impacting queries currently in progress? How can the system handle several user-interface clients for the same collections? Computer science professors and lecturers from the University of Waikato have developed a software structure that successfully manages this complexity in the New Zealand Digital Library. The researchers' primary goal has been to minimize the effort required to keep the system operational while continuing to expand its offerings.
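    One common answer to the first question, updating indexes without disturbing queries in progress, is to build the replacement index off to the side and publish it with a single reference swap, so running queries keep the snapshot they started with. The sketch below illustrates that general technique; it is not a description of the New Zealand Digital Library's actual mechanism.

```python
# Conceptual sketch of updating a collection's index without disturbing
# in-progress queries: build the new index completely, then swap one reference.

import threading


class CollectionServer:
    def __init__(self, index: dict):
        self._index = index          # currently published index (term -> doc ids)
        self._lock = threading.Lock()

    def query(self, term: str):
        # Take a reference to whichever index is published right now; even if
        # a rebuild finishes mid-query, this snapshot stays valid and consistent.
        index = self._index
        return index.get(term, [])

    def rebuild(self, documents: dict):
        # Build the replacement index off to the side before publishing it.
        new_index = {}
        for doc_id, text in documents.items():
            for term in text.lower().split():
                new_index.setdefault(term, []).append(doc_id)
        with self._lock:
            self._index = new_index   # single reference swap publishes the update


server = CollectionServer({"library": ["d1"]})
print(server.query("library"))        # ['d1']
server.rebuild({"d2": "digital library software"})
print(server.query("library"))        # ['d2']
```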

    Zero and low carbon buildings: A driver for change in working practices and the use of computer modelling and visualization

    Buildings account for significant carbon dioxide emissions, both in construction and in operation. Governments around the world are setting targets and legislating to reduce the carbon emissions related to the built environment. The challenges presented by increasingly rigorous standards for construction projects will mean a paradigm shift in how new buildings are designed and managed, which in turn will require computational modelling and visualization of buildings and their energy performance throughout the building's life cycle. This paper briefly outlines how the UK government is planning to reduce carbon emissions for new buildings. It discusses the challenges faced by the architectural, construction and building management professions in adjusting to the proposed requirements for low or zero carbon buildings, and then outlines how software tools, including visualization tools, could develop to support the designer, contractor and user.