
    An approach for real world data modelling with the 3D terrestrial laser scanner for built environment

    Capturing and modelling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including EDM, GPS, photogrammetric applications, remote sensing and traditional building surveying. However, these technologies are not always practical and efficient in terms of time, cost and accuracy. Furthermore, a multidisciplinary knowledge base, created from studies and research on the different aspects of regeneration, is fundamental: historical, architectural, archaeological, environmental, social, economic, etc. An adequate diagnosis for regeneration requires buildings and their surroundings to be described by means of documentation and plans. At present, however, this is considerably far removed from the real situation, since more often than not it is extremely difficult to obtain full documentation and cartography of acceptable quality: the material on constructive pathologies and systems is often insufficient or deficient (plans that simply reflect levels, isolated photographs, etc.). Sometimes the information does exist, but this fact is not known, or it is not easily accessible, leading to unnecessary duplication of effort and resources. In this paper, we discuss 3D laser scanning technology, which can acquire high-density point data quickly and accurately. Moreover, the scanner can digitize all the 3D information of a real-world object such as a building, tree or terrain down to millimetre detail, so it can benefit the refurbishment process in Built Environment regeneration and is a potential solution to the challenges above. The paper introduces an approach for scanning buildings, processing the raw point cloud data, and modelling for CAD extraction and building object classification by pattern matching in the IFC (Industry Foundation Classes) format. The approach presented here can lead to parametric design and Building Information Modelling (BIM) for existing structures. Two case studies demonstrate the use of laser scanner technology in the Built Environment: the Jactin House Building in East Manchester and the Peel Building on the campus of the University of Salford. Through these case studies, the use of laser scanners is explained, and their integration with various technologies and systems is explored for professionals in the Built Environment.
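
    As a rough illustration of one step in such a pipeline, the sketch below uses the open-source Open3D library to downsample a scan and segment a planar surface, a common way to pull candidate building elements (walls, floors, ceilings) out of raw point cloud data. The file name and parameter values are assumptions for the sketch, not the paper's actual workflow.

```python
# Illustrative sketch only: segmenting a planar building element from a
# terrestrial laser scan, a typical step between the raw point cloud and
# the CAD/IFC object extraction the paper describes.
import open3d as o3d

# Hypothetical scan file; real scans come from the scanner's export tools.
cloud = o3d.io.read_point_cloud("jactin_house_scan.ply")

# Downsample: millimetre-density scans are too heavy for direct fitting.
cloud = cloud.voxel_down_sample(voxel_size=0.02)  # 2 cm grid

# RANSAC plane fit: large planar segments are wall/floor/ceiling candidates.
plane_model, inliers = cloud.segment_plane(distance_threshold=0.01,
                                           ransac_n=3,
                                           num_iterations=1000)
a, b, c, d = plane_model
print(f"Plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
      f"{len(inliers)} supporting points")

wall_candidate = cloud.select_by_index(inliers)
remainder = cloud.select_by_index(inliers, invert=True)  # feed back into loop
```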

    Compensation methods to support generic graph editing: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. However, compensation operations are often written simply as a^−1 in the transaction model literature. This notation ignores any operation parameters, results, and side effects. A schema designer intending to use an advanced transaction model is expected (in fact, required) to write correct method code. However, in the days of cut-and-paste, this is much easier said than done. In this paper, we demonstrate the feasibility of using an off-the-shelf theorem prover (also called a proof assistant) to perform automated verification of compensation requirements for an OODB schema. We report on the results of a case study in verification for a particular advanced transaction model that supports cooperative applications. The case study is based on an OODB schema that provides generic graph editing functionality for the creation, insertion, and manipulation of nodes and links.
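
    As a toy illustration of what such a verification amounts to (the paper translates a full OODB schema into an off-the-shelf prover; the graph model and names below are invented for this sketch), here is a Lean proof that inserting a node and then running its compensation restores the previous state:

```lean
-- Minimal sketch: a node set modelled as a list of identifiers.
def insertNode (n : Nat) (nodes : List Nat) : List Nat := n :: nodes

-- Compensation for insertNode: drop the node that was just added.
def compensateInsert (nodes : List Nat) : List Nat := nodes.tail

-- The compensation requirement: running the operation and then its
-- compensation is the identity on the state. Here it holds by computation.
theorem insert_then_compensate (n : Nat) (nodes : List Nat) :
    compensateInsert (insertNode n nodes) = nodes := rfl
```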

    Historical collaborative geocoding

    The latest developments in digital technology have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into a direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open-source, open-data, extensible solution for geocoding based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and cover several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode (one address at a time or in batch mode) and display the results over current or historical topographical maps, so that they can be checked and collaboratively edited. The system has been tested on the city of Paris for the 19th and 20th centuries; it shows a high return rate and is fast enough to be used interactively.
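
    As an illustration of the kind of multi-dimensional matching described here, the Python sketch below scores gazetteer entries against a historical address on fuzzy semantic and fuzzy temporal axes. The data model, weights, and example entries are invented for the sketch and are not the authors' implementation.

```python
# Illustrative gazetteer matching: combine fuzzy string similarity with
# temporal overlap to rank geohistorical objects against a query address.
from difflib import SequenceMatcher

def semantic_score(query: str, entry_name: str) -> float:
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, query.lower(), entry_name.lower()).ratio()

def temporal_score(query_year: int, valid_from: int, valid_to: int,
                   slack: int = 10) -> float:
    """1.0 inside the entry's validity interval, decaying linearly outside."""
    if valid_from <= query_year <= valid_to:
        return 1.0
    gap = min(abs(query_year - valid_from), abs(query_year - valid_to))
    return max(0.0, 1.0 - gap / slack)

def match_score(query: str, query_year: int, entry: dict,
                w_sem: float = 0.7, w_tmp: float = 0.3) -> float:
    # Weighted sum of the dimensions; a real system would add scale and
    # spatial-precision terms and make the weights user-customisable.
    return (w_sem * semantic_score(query, entry["name"])
            + w_tmp * temporal_score(query_year, entry["from"], entry["to"]))

gazetteer = [  # invented example entries
    {"name": "rue de la Chaussee d'Antin", "from": 1791, "to": 1900},
    {"name": "rue Mirabeau", "from": 1791, "to": 1793},
]
best = max(gazetteer, key=lambda e: match_score("rue Mirabeau", 1792, e))
print(best["name"])  # -> rue Mirabeau
```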

    Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all

    This article describes “Obvious”: a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone the choice of which concrete toolkit(s) to use until later in the development of visual analytics applications. We also report on the lessons we learned when wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, partly Improvise, JUNG, and other data management libraries. We show several examples of the use of Obvious and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: Obvious is meant to evolve as more components of information visualization systems become consensual. It is also designed to help information visualization systems adhere to best practices, providing a higher level of interoperability and leveraging the domain of visual analytics.
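
    The wrapping idea can be sketched independently of the real API. Obvious itself is a Java library; the interface and adapter below are hypothetical Python stand-ins for the pattern, not Obvious's actual classes.

```python
# Hypothetical sketch of the meta-toolkit pattern: one toolkit-neutral data
# model, with thin adapters binding concrete toolkits to it.
from abc import ABC, abstractmethod

class ObviousTable(ABC):
    """Abstract tabular data model shared by all wrapped components."""
    @abstractmethod
    def row_count(self) -> int: ...
    @abstractmethod
    def value_at(self, row: int, column: str): ...

class ListOfDictsTable(ObviousTable):
    """Adapter for plain Python data; a real binding would wrap Prefuse,
    the InfoVis Toolkit, Weka, etc. behind the same interface."""
    def __init__(self, rows):
        self._rows = rows
    def row_count(self) -> int:
        return len(self._rows)
    def value_at(self, row: int, column: str):
        return self._rows[row][column]

# Any visualization or mining component written against ObviousTable can
# consume data from any adapted toolkit, so the concrete-toolkit choice is
# deferred, which is the meta-toolkit's central benefit.
table = ListOfDictsTable([{"city": "Paris", "pop": 2_100_000}])
print(table.row_count(), table.value_at(0, "city"))
```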

    Design of a shared whiteboard component for multimedia conferencing

    This paper reports on the development of a framework for multimedia applications in the domain of tele-education. It focuses on the protocol design of a specific component of the framework, namely a shared whiteboard application, and also discusses the relationship of this component with the other components of the framework. A salient feature of the framework is its use of an advanced ATM-based network service. The design of the shared whiteboard component is considered representative of the design as a whole, and is used to illustrate how a flexible protocol architecture that exploits innovative network functions and satisfies demanding user requirements can be developed.
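
    As a rough illustration of the kind of protocol element such a component needs (an invented sketch, not the paper's ATM-based design), a whiteboard update can be modelled as a sequenced, serializable operation that every participant applies in the same order:

```python
# Hypothetical whiteboard protocol message: a drawing operation serialized
# for broadcast to all conference participants.
import json
from dataclasses import dataclass, asdict

@dataclass
class StrokeOp:
    """One whiteboard update; participants apply ops in sequence order,
    so late joiners can rebuild the board by replaying the history."""
    session_id: str
    sequence: int          # total order across the session
    tool: str              # e.g. "pen", "eraser"
    points: list           # [[x, y], ...] in board coordinates
    color: str

def encode(op: StrokeOp) -> bytes:
    return json.dumps(asdict(op)).encode("utf-8")

def decode(data: bytes) -> StrokeOp:
    return StrokeOp(**json.loads(data.decode("utf-8")))

op = StrokeOp("demo", 1, "pen", [[0, 0], [10, 12]], "#000000")
assert decode(encode(op)) == op  # round-trips losslessly
```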

    Compensation methods to support cooperative applications: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. A schema designer is typically required to supply, for each transaction a, another transaction a^−1 to semantically undo the effects of a. Little attention has been paid, however, to verifying the desirable properties of such operations. This paper demonstrates the use of a higher-order logic theorem prover for verifying that compensating transactions return a database to its original state. It is shown how an OODB schema is translated to the language of the theorem prover so that proofs can be performed on the compensating transactions.
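
    To illustrate why a compensating transaction often needs parameters rather than being a bare a^−1, here is a minimal Lean sketch (invented names and state model, not the paper's schema or prover) in which the compensation must capture the overwritten value in order to provably restore the original state:

```lean
-- Minimal sketch: a one-field database state.
structure Account where
  balance : Int

def setBalance (v : Int) (a : Account) : Account := { a with balance := v }

-- The compensation is parameterised by the prior balance, recorded before
-- the update ran; without it the old state could not be recovered.
def compensateSet (old : Int) (a : Account) : Account := { a with balance := old }

-- Verified compensation requirement: update then compensate is the identity.
theorem set_then_compensate (v : Int) (a : Account) :
    compensateSet a.balance (setBalance v a) = a := rfl
```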