799 research outputs found

    An Information Extraction Approach to Reorganizing and Summarizing Specifications

    Materials and Process Specifications are complex semi-structured documents containing numeric data, text, and images. This article describes a coarse-grain extraction technique to automatically reorganize and summarize specification content. Specifically, a strategy for semantic markup, which captures content within a semantic ontology relevant to semi-automatic extraction, has been developed and experimented with. The working prototypes were built in the context of Cohesia's existing software infrastructure and use techniques from Information Extraction, XML technology, and related areas.
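    As a rough illustration of the kind of coarse-grain, ontology-driven markup described above (the concept names, patterns, and sample text below are invented for the sketch and are not Cohesia's actual ontology or tooling):

```python
# Illustrative sketch: tag numeric requirements in specification text with
# hypothetical ontology concepts and emit them as XML. Patterns and concept
# names are assumptions made for this example.
import re
import xml.etree.ElementTree as ET

# Hypothetical mapping from surface patterns to ontology concepts.
PATTERNS = {
    "Temperature": re.compile(r"\d+(?:\.\d+)?\s*(?:deg\s*)?[CF]\b"),
    "Thickness":   re.compile(r"\d+(?:\.\d+)?\s*(?:mm|in)\b"),
}

def markup(spec_text: str) -> ET.Element:
    """Wrap each recognized value in a concept-tagged element."""
    root = ET.Element("spec")
    for concept, pattern in PATTERNS.items():
        for match in pattern.finditer(spec_text):
            elem = ET.SubElement(root, concept.lower())
            elem.set("concept", concept)
            elem.text = match.group(0)
    return root

print(ET.tostring(markup("Heat treat at 950 C; max section thickness 12 mm."),
                  encoding="unicode"))
```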

    Solutions for decision support in university management

    The paper provides an overview of decision support systems in order to define the role of such a system in university management. The authors present new technologies and the basic concepts of multidimensional data analysis using models of business processes within universities. Based on information provided by the scientific literature and on the authors' experience, the study aims to define selection criteria for choosing a development environment in which to design a support system dedicated to university management. The contributions consist in designing a data warehouse model and models of OLAP analysis to assist decision-making in university management.
    Keywords: university management, decision support, multidimensional analysis, data warehouse, OLAP
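    A minimal sketch of the multidimensional-analysis idea, assuming a pandas environment (the fact table, dimensions, and figures below are toy examples, not the authors' warehouse design):

```python
# Toy fact table of enrollments analyzed along two assumed dimensions
# (faculty, year) with a pandas pivot, mimicking one OLAP roll-up that a
# university decision-support system might expose.
import pandas as pd

facts = pd.DataFrame({
    "faculty":  ["Economics", "Economics", "Engineering", "Engineering"],
    "year":     [2022, 2023, 2022, 2023],
    "enrolled": [410, 455, 380, 365],
})

# Roll-up: total enrollment per faculty and year (one slice of the cube).
cube = facts.pivot_table(index="faculty", columns="year",
                         values="enrolled", aggfunc="sum")
print(cube)
```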

    An XML-Based Approach to Handling Tables in Documents

    We explore the application of XML technology for handling tables in legacy semi-structured documents. Specifically, we analyze annotating heterogeneous documents containing tables to obtain a formalized XML Master document that improves traceability (hence easing verification and update) and enables manipulation using XSLT stylesheets. This approach is useful when table instances far outnumber distinct table types, because the effort required to annotate a table instance is small relative to the effort of formalizing table processing that respects the table's semantics. This work is also relevant for authoring new documents with tables that should be accessible to both humans and machines.
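    A hedged illustration of the approach, assuming lxml is available (the element names, attribute names, and stylesheet below are invented for the sketch, not the paper's actual schema):

```python
# A hand-annotated table instance captured as XML and rendered to text with a
# small XSLT stylesheet via lxml; real master documents would be richer.
from lxml import etree

table_xml = etree.XML("""
<table type="hardness-limits">
  <row><alloy>6061-T6</alloy><min-hb>88</min-hb></row>
  <row><alloy>7075-T6</alloy><min-hb>140</min-hb></row>
</table>""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/table">
    <xsl:for-each select="row">
      <xsl:value-of select="concat(alloy, ': ', min-hb, ' HB&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(table_xml)))
```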

    Distributed data service for data management in internet of things middleware

    The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as have smartphones and other devices that continuously collect data about our lives, even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collection and querying functionalities and performance, the proposed DDS is evaluated in two case studies involving a simulated smart home system: the first case is devoted to evaluating data collection and aggregation when the DDS interacts with the UIoT middleware, and the second compares DDS data collection with the same functionality implemented within the Kaa middleware.
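    As a loose sketch of the aggregation component's role (the class, filtering rule, and sensor identifier below are assumptions for illustration, not the DDS interface described in the paper):

```python
# Readings from many sensors are filtered on arrival and folded into small
# per-sensor windows so queries can be answered without scanning raw data.
from collections import defaultdict, deque
from statistics import mean

class WindowedAggregator:
    def __init__(self, window=10, valid_range=(-40.0, 125.0)):
        self.valid_range = valid_range                       # assumed filter rule
        self.buffers = defaultdict(lambda: deque(maxlen=window))

    def collect(self, sensor_id: str, value: float) -> None:
        lo, hi = self.valid_range
        if lo <= value <= hi:                                # drop outliers
            self.buffers[sensor_id].append(value)

    def query(self, sensor_id: str) -> float:
        """Windowed average for one sensor, served from memory."""
        return mean(self.buffers[sensor_id])

agg = WindowedAggregator()
for v in (21.5, 22.0, 999.0, 21.8):                          # 999.0 is filtered
    agg.collect("living-room/temperature", v)
print(agg.query("living-room/temperature"))                  # ~21.77
```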

    Semantics-Empowered Big Data Processing with Applications

    We discuss the nature of Big Data and address the role of semantics in analyzing and processing Big Data that arises in the context of Physical-Cyber-Social Systems. We organize our research around the Five Vs of Big Data, where four of the Vs are harnessed to produce the fifth V: value. To handle the challenge of Volume, we advocate semantic perception that can convert low-level observational data into higher-level abstractions more suitable for decision-making. To handle the challenge of Variety, we resort to semantic models and annotations of data so that much of the intelligent processing can be done at a level independent of the heterogeneity of data formats and media. To handle the challenge of Velocity, we seek to use continuous semantics capability to dynamically create event- or situation-specific models and recognize relevant new concepts, entities and facts. To handle Veracity, we explore the formalization of trust models and approaches to glean trustworthiness. These four Vs of Big Data are harnessed by semantics-empowered analytics to derive value in support of practical applications transcending the physical-cyber-social continuum.
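    A toy rendering of the semantic-perception step used for Volume, with thresholds and labels invented purely for illustration (they are not the authors' models):

```python
# Lift low-level observations to a higher-level, decision-relevant abstraction
# with hand-written rules; real semantic perception would use richer ontologies.
def perceive(observations: dict) -> str:
    temp = observations.get("temperature_c", 0.0)
    humidity = observations.get("humidity_pct", 100.0)
    if temp > 38.0 and humidity < 20.0:
        return "HighFireRisk"        # abstraction suitable for decision-making
    if temp < 0.0:
        return "IcingConditions"
    return "Normal"

print(perceive({"temperature_c": 41.2, "humidity_pct": 12.0}))  # HighFireRisk
```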

    Is Bigger Always Better? Lessons Learnt from the Evolution of Deep Learning Architectures for Image Classification

    There exist numerous scientific contributions to the design of deep learning networks. However, choosing an architecture suited to a given business problem under constraints such as memory and inference-time requirements can be cumbersome. We reflect on the evolution of the state-of-the-art architectures for convolutional neural networks (CNNs) for the case of image classification. We compare architectures with regard to classification results, model size, and inference time to discuss the design choices for CNN architectures. To maintain scientific comprehensibility, the established ILSVRC benchmark is used as the basis for model selection and benchmark data. The quantitative comparison shows that while model size and required inference time correlate with result accuracy across all architectures, there are major trade-offs between those factors. The qualitative analysis further shows that published models always build on previous research and adopt improved components in either evolutionary or revolutionary ways. Finally, we discuss design and result improvements during the evolution of CNN architectures and derive practical implications for designing deep learning networks.
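    A rough way to reproduce the non-accuracy side of such a comparison, assuming torch and torchvision are installed (architectures chosen arbitrarily; timings depend on hardware):

```python
# Measure parameter count and average CPU inference time for two off-the-shelf
# ILSVRC-era architectures; accuracy would come from a separate evaluation.
import time
import torch
from torchvision import models

def profile(model: torch.nn.Module, runs: int = 10):
    model.eval()
    params = sum(p.numel() for p in model.parameters())
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        model(x)                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return params, (time.perf_counter() - start) / runs

for name, ctor in [("resnet18", models.resnet18), ("vgg16", models.vgg16)]:
    n, t = profile(ctor(weights=None))            # random weights: timing only
    print(f"{name}: {n / 1e6:.1f}M params, {t * 1000:.0f} ms per image")
```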