
    Integrating Distributed Sources of Information for Construction Cost Estimating using Semantic Web and Semantic Web Service technologies

    A construction project requires the collaboration of several organizations, such as the owner, designer, contractor, and material suppliers, and these organizations need to exchange information to support their teamwork. Understanding the information received from other organizations requires specialized human resources. Construction cost estimating is one of the processes that requires information from several sources, including a building information model (BIM) created by the designers, estimating assembly and work item information maintained by contractors, and construction material cost data provided by material suppliers. Currently, it is not easy to integrate the information necessary for cost estimating over the Internet. This paper discusses a new approach to construction cost estimating that uses Semantic Web technology, which provides an infrastructure and a data modeling format for accessing, combining, and sharing information over the Internet in a machine-processable form. The estimating approach presented here relies on BIM, estimating knowledge, and construction material cost data expressed in a web ontology language, and it makes the various sources of estimating data accessible as SPARQL (SPARQL Protocol and RDF Query Language) endpoints or Semantic Web Services. We present an estimating application that integrates distributed information provided by project designers, contractors, and material suppliers to prepare cost estimates. The purpose of this paper is not to fully automate the estimating process but to streamline it by reducing human involvement in repetitive cost estimating activities.
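
    What follows is a minimal, hedged sketch of the kind of integration the abstract describes: pulling unit-cost data for a material from a supplier's SPARQL endpoint and combining it with a quantity taken off a BIM model. It is written in Python with the SPARQLWrapper library; the endpoint URL, the est: ontology namespace, and the property names are illustrative assumptions, not the paper's actual vocabulary.

    # Hedged sketch: query a (hypothetical) material-supplier SPARQL endpoint and
    # price a work item using a quantity taken from the designer's BIM model.
    from SPARQLWrapper import SPARQLWrapper, JSON

    SUPPLIER_ENDPOINT = "http://supplier.example.org/sparql"  # assumed endpoint URL

    QUERY = """
    PREFIX est: <http://example.org/estimating#>   # assumed estimating ontology
    SELECT ?material ?unitCost WHERE {
      ?material a est:ConcreteMix ;
                est:unitCost ?unitCost .
    }
    """

    sparql = SPARQLWrapper(SUPPLIER_ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    bim_quantity_m3 = 120.0  # quantity of concrete taken off the BIM model (assumed)
    for row in results["results"]["bindings"]:
        unit_cost = float(row["unitCost"]["value"])
        print(row["material"]["value"], "estimated cost:", unit_cost * bim_quantity_m3)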

    A lightweight web video model with content and context descriptions for integration with linked data

    The rapid growth of video data on the Web has created an urgent need for effective representation, management and retrieval of web videos. Recently, many studies have been carried out on the ontological representation of videos, using either domain-dependent or generic schemas such as MPEG-7, MPEG-4, and COMM. In spite of their extensive coverage and sound theoretical grounding, these schemas have yet to be widely adopted; two likely reasons are the complexity involved and a lack of tool support. We propose a lightweight video content model for content-context description and integration. What distinguishes the model is that it attempts to capture the emerging social context used to describe and interpret a video. Our approach is grounded in exploiting easily extractable, evolving contextual metadata and in the availability of existing data on the Web. This enables representational homogeneity and a firm basis for information integration among semantically enabled data sources. The model reuses many existing schemas to describe its ontology classes and shows the scope for interlinking with the Linked Data cloud.
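
    As a rough illustration of such a lightweight description (not the paper's actual schema), the Python/rdflib sketch below describes a single web video with a few content and context properties, reuses the existing Dublin Core and FOAF vocabularies, and interlinks the description with the Linked Data cloud through a DBpedia resource. The vid: namespace, the video URI, and the chosen properties are assumptions.

    # Hedged sketch of a lightweight content-and-context description for one web video.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, FOAF, RDF

    VID = Namespace("http://example.org/videomodel#")  # assumed lightweight schema

    g = Graph()
    g.bind("dcterms", DCTERMS)
    g.bind("foaf", FOAF)
    g.bind("vid", VID)

    video = URIRef("http://example.org/videos/42")      # hypothetical video resource
    uploader = URIRef("http://example.org/users/alice")

    g.add((video, RDF.type, VID.WebVideo))
    g.add((video, DCTERMS.title, Literal("A tour of the Colosseum")))
    g.add((video, DCTERMS.creator, uploader))           # contextual metadata: who uploaded it
    g.add((uploader, RDF.type, FOAF.Person))
    g.add((video, VID.userTag, Literal("rome")))        # evolving social context: a user tag
    # interlinking with the Linked Data cloud through an existing DBpedia resource
    g.add((video, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Colosseum")))

    print(g.serialize(format="turtle"))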

    Bridging the Semantic Gap in Multimedia Information Retrieval: Top-down and Bottom-up approaches

    Semantic representation of multimedia information is vital for enabling the kind of multimedia search capabilities that professional searchers require. Manual annotation is often not possible because of the sheer scale of the multimedia information that needs indexing. This paper explores the ways in which we are using both top-down, ontologically driven approaches and bottom-up, automatic-annotation approaches to provide retrieval facilities to users. We also discuss many of the techniques we are currently investigating to combine these top-down and bottom-up approaches.
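
    The paper does not prescribe a particular combination algorithm, but a toy sketch makes the idea concrete: keep curated, ontology-driven (top-down) labels and admit automatically detected (bottom-up) concepts only when they are confident and correspond to something modelled in the ontology. The concept names and threshold below are invented for illustration.

    # Toy sketch: merging top-down ontology labels with bottom-up detector output.
    ONTOLOGY_CONCEPTS = {"aircraft", "airport", "runway"}  # concepts modelled top-down (assumed)

    def merge_annotations(manual_labels, detector_scores, threshold=0.6):
        """Keep curated labels; add detector hits that are confident and ontology-grounded."""
        merged = set(manual_labels)
        for concept, confidence in detector_scores.items():
            if confidence >= threshold and concept in ONTOLOGY_CONCEPTS:
                merged.add(concept)
        return sorted(merged)

    print(merge_annotations(
        manual_labels={"aircraft"},                       # top-down annotation
        detector_scores={"runway": 0.82, "beach": 0.71},  # bottom-up classifier scores
    ))
    # -> ['aircraft', 'runway']  ('beach' is dropped: it is not an ontology concept)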

    Opening up Magpie via semantic services

    Magpie is a suite of tools supporting a ‘zero-cost’ approach to semantic web browsing: it avoids the need for manual annotation by automatically associating an ontology-based semantic layer with web resources. An important aspect of Magpie, which differentiates it from superficially similar hypermedia systems, is that the association between items on a web page and semantic concepts is not merely a mechanism for dynamic linking; it is the enabling condition for locating services and making them available to the user. These services can be activated manually by the user (pull services) or triggered opportunistically when the appropriate web entities are encountered during a browsing session (push services). In this paper we analyze Magpie from the perspective of building semantic web applications and note that earlier implementations did not fulfill the criterion of being “open as to services”, which is a key aspect of the emerging semantic web. For this reason, over the past twelve months we have carried out a radical redesign of Magpie, resulting in a novel architecture that is open with respect to both ontologies and semantic web services. This new architecture goes beyond merely supporting semantic web browsing and can be seen as a software framework for designing and implementing semantic web applications.

    Magpie: towards a semantic web browser

    Web browsing involves two tasks: finding the right web page and then making sense of its content. So far, research has focused on supporting the task of finding web resources through ‘standard’ information retrieval mechanisms or semantics-enhanced search. Much less attention has been paid to the second task. In this paper we describe Magpie, a tool that supports the interpretation of web pages. Magpie offers complementary knowledge sources that a reader can call upon to quickly access background knowledge relevant to a web resource. It automatically associates an ontology-based semantic layer with web resources, allowing relevant services to be invoked within a standard web browser. Hence, Magpie may be seen as a step towards a semantic web browser. Its functionality is illustrated with examples of how it has been integrated with our lab’s web resources.
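
    A minimal sketch of the general mechanism (not Magpie's implementation) is shown below: scan the text of a page for labels of known ontology instances and associate each hit with the services registered for its class. The lexicon entries, classes, and service names are invented for illustration.

    # Hedged sketch: attach an ontology-based semantic layer to page text.
    import re

    # label -> ontology class; such a lexicon would normally be derived from the loaded ontology
    LEXICON = {
        "Ada Lovelace": "Researcher",            # hypothetical instance labels
        "Example Research Lab": "Organization",
    }
    # ontology class -> services that can be invoked on instances of that class
    SERVICES = {
        "Researcher": ["show-publications", "show-projects"],
        "Organization": ["show-members"],
    }

    def annotate(page_text):
        """Return (label, class, services) for every lexicon entry found in the page."""
        hits = []
        for label, cls in LEXICON.items():
            if re.search(re.escape(label), page_text, re.IGNORECASE):
                hits.append((label, cls, SERVICES.get(cls, [])))
        return hits

    page = "A seminar given by Ada Lovelace at the Example Research Lab."
    for label, cls, services in annotate(page):
        print(f"{label} [{cls}]: available services -> {', '.join(services)}")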

    Desiderata for an Every Citizen Interface to the National Information Infrastructure: Challenges for NLP

    In this paper, I provide desiderata for an interface that would enable ordinary people to properly access the capabilities of the NII. I identify some of the technologies that will be needed to achieve these desiderata and discuss current and future research directions that could lead to their development. In particular, I focus on the ways in which theory and techniques from natural language processing could contribute to future interfaces to the NII. The evolving national information infrastructure (NII) has made available a vast array of on-line services and networked information resources in a variety of forms (text, speech, graphics, images, video). At the same time, advances in computing and telecommunications technology have made it possible for an increasing number of households to own (or lease or use) powerful personal computers connected to this resource. Accompanying this progress is the expectation that people will be able to …

    Generating natural language specifications from UML class diagrams

    The early phases of software development are known to be problematic and difficult to manage, and errors occurring during these phases are expensive to correct. Many systems have been developed to aid the transition from informal natural language requirements to semi-structured or formal specifications. Furthermore, consistency checking is seen by many software engineers as the way to reduce the number of errors occurring during the software development life cycle and to allow early verification and validation of software systems. However, such checking is confined to the models developed during analysis and design and fails to include the early natural language requirements. This excludes proper user involvement and creates a gap between the original requirements and the updated and modified models and implementations of the system. To improve this process, we propose a system that generates natural language specifications from UML class diagrams. We first investigate the variation in the language used to name the components of a class diagram, based on a study of a large number of examples from the literature, and then develop rules for removing ambiguities in the subset of natural language used within UML. We use WordNet, a linguistic ontology, to disambiguate the lexical structures of the UML string names and generate semantically sound sentences. Our system is developed in Java and is tested on an independent, though academic, case study.
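
    A hedged sketch of one step in such a pipeline is given below: splitting a CamelCase UML class name, looking its head noun up in WordNet via NLTK, and emitting a simple sentence. It is written in Python rather than Java and is not the rule set developed in the paper; the class and attribute names are invented, and NLTK with the WordNet corpus must be installed (pip install nltk; nltk.download('wordnet')).

    # Hedged sketch: from a UML class name to a simple natural-language sentence.
    import re
    from nltk.corpus import wordnet as wn  # requires the WordNet corpus

    def split_camel_case(name):
        """'LibraryMember' -> ['library', 'member']"""
        return [w.lower() for w in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", name)]

    def describe_class(class_name, attributes):
        words = split_camel_case(class_name)
        head = words[-1]                         # treat the last word as the head noun
        synsets = wn.synsets(head, pos=wn.NOUN)  # naive: take the first noun sense
        gloss = synsets[0].definition() if synsets else "domain concept"
        return (f"A {' '.join(words)} is a {gloss}. "
                f"Each {' '.join(words)} has {', '.join(attributes)}.")

    print(describe_class("LibraryMember", ["a name", "a membership number"]))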