
    Augmented reality usage for prototyping speed up

    The first part of the article describes our approach to solving this problem by means of Augmented Reality. Merging the real-world model with digital objects streamlines work with the model and significantly speeds up the whole production phase. The main advantage of augmented reality is the possibility of direct manipulation of the scene using a portable digital camera. Digital objects can also be added to the scene using identification markers placed on the surface of the model. It is therefore not necessary to work with special input devices and lose contact with the real-world model; adjustments are made directly on the model. The key problem of the outlined solution is the ability to identify an object within the camera picture and replace it with the digital object. The second part of the article focuses on identifying the exact position and orientation of the marker within the picture. The identification marker is generalized into a triple of points which represents a general plane in space. We discuss the spatial identification of these points and the representation of their position and orientation by means of a transformation matrix. This matrix is used for rendering the graphical objects (e.g. in OpenGL and Direct3D). Keywords: augmented reality, prototyping, pose estimation, transformation matrix
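
    As a rough illustration of the pose-estimation step described above, the sketch below assembles a 4x4 transformation matrix from three marker points spanning a plane. The function name, point layout, and use of NumPy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def marker_pose(p0, p1, p2):
    """Build a 4x4 model matrix from three marker points.

    p0 is taken as the marker origin; p1 and p2 span the marker plane.
    Returns a matrix (rotation columns + translation) that maps
    marker-local coordinates into camera/world space.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0                      # first in-plane axis
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)         # plane normal
    z /= np.linalg.norm(z)
    y = np.cross(z, x)               # completes the right-handed frame

    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = x, y, z   # rotation columns
    m[:3, 3] = p0                            # translation
    return m

# Example: a marker lying in the XY plane, offset by (1, 2, 0)
print(marker_pose([1, 2, 0], [2, 2, 0], [1, 3, 0]))
```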

    Hydro-NEXRAD-2: Real-time Access To Customized Radar-rainfall For Hydrologic Applications

    Hydro-NEXRAD-2 (HNX2) is a prototype system that allows hydrologic users real-time access to NEXRAD radar data in support of a wide range of research. The system processes basic radar data (Level II) and delivers radar-rainfall products based on the user's custom selection of features such as spatial domain, rainfall product space and time resolution, and rainfall estimation algorithms. HNX2 collects real-time, unprocessed data from multiple NEXRAD radars as they become available, processes them through a user-configurable pipeline of data-processing modules, and publishes the processed data products at regular intervals. Modules in the data-processing pipeline encapsulate algorithms such as non-meteorological echo detection, radar range correction, radar-reflectivity-rain rate (Z-R) conversion, echo advection correction, mosaicking of products from multiple radars, and grid projections and transformations. This paper describes the challenges involved in HNX2's development and implementation, which include real-time error handling, time-synchronization of data from multiple asynchronous sources, generation of multiple-radar metadata products, and distribution of products to a user base with diverse needs and constraints. HNX2 publishes products through automation and allows multiple users access to published products. Currently, HNX2 is serving near real-time rain-rate maps for Iowa in the USA using data from seven radars covering the state. Hydrologic models operated by The University of Iowa's Iowa Flood Center use these products. © IWA Publishing 2013
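
    To illustrate one of the pipeline modules named above, the sketch below performs a simple reflectivity-to-rain-rate (Z-R) conversion using the classic Marshall-Palmer coefficients; the function name and defaults are illustrative assumptions, since HNX2 lets users select the Z-R relationship themselves.

```python
import numpy as np

def zr_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.

    Defaults are the classic Marshall-Palmer coefficients; in a configurable
    pipeline such as HNX2's, a and b would be user-selectable parameters.
    """
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> linear Z
    return (z_linear / a) ** (1.0 / b)

print(zr_rain_rate([20, 35, 50]))  # light, moderate, heavy precipitation
```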

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes lack functionality that has typically been absorbed into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results with similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
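
    As a hypothetical illustration of metrics-driven detection of the two smells discussed above (not the heuristic actually used by the tool or by Marinescu), the sketch below flags data classes and god classes from a few per-class metrics using arbitrary thresholds.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Simplified per-class metrics; a real tool computes these from source."""
    name: str
    num_methods: int
    num_accessors: int           # getters/setters
    num_attributes: int
    foreign_data_accesses: int   # accesses to attributes of other classes

def looks_like_data_class(m: ClassMetrics) -> bool:
    # Mostly accessors and exposed state, little behaviour of its own.
    return m.num_methods > 0 and m.num_accessors / m.num_methods > 0.7

def looks_like_god_class(m: ClassMetrics, size_threshold=40, fda_threshold=10) -> bool:
    # Large, behaviourally dominant class that reaches into other classes' data.
    return m.num_methods > size_threshold and m.foreign_data_accesses > fda_threshold

suspects = [
    ClassMetrics("Order", 8, 7, 6, 0),
    ClassMetrics("OrderManager", 55, 4, 12, 23),
]
for s in suspects:
    print(s.name, looks_like_data_class(s), looks_like_god_class(s))
```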

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions, or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes in remote locations. However, the existing web-based technologies do not yet fully exploit modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of a domain-specific 3D version control system.
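
    A minimal sketch of how schemaless scene-graph nodes and node-level deltas might look in a document-oriented store; the field names and diff logic are assumptions for illustration and do not reflect 3D Repo's actual schema.

```python
import uuid

def make_node(node_type, name, parents=(), **payload):
    """A schemaless scene-graph node as it might be stored in a document DB."""
    return {
        "_id": str(uuid.uuid4()),        # unique per revision of the node
        "shared_id": str(uuid.uuid4()),  # stable identity across revisions
        "type": node_type,               # e.g. "mesh" or "transformation"
        "name": name,
        "parents": list(parents),
        **payload,                       # arbitrary extra fields, e.g. vertices
    }

def diff_revision(old_nodes, new_nodes):
    """Node-level delta between two revisions, keyed on the stable shared_id."""
    old = {n["shared_id"]: n for n in old_nodes}
    new = {n["shared_id"]: n for n in new_nodes}
    return {
        "added":    [sid for sid in new if sid not in old],
        "deleted":  [sid for sid in old if sid not in new],
        "modified": [sid for sid in new if sid in old and new[sid] != old[sid]],
    }

# A trivial two-revision history: the same mesh with an edited payload.
rev1 = [make_node("mesh", "chair", vertices=120)]
rev2 = [dict(rev1[0], vertices=150)]
print(diff_revision(rev1, rev2))  # the chair node shows up as "modified"
```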

    Semantically intelligent semi-automated ontology integration

    An ontology is a means of categorizing and storing information. Web ontologies help in retrieving precise, relevant information over the web. However, when multiple ontologies of the same domain are used, the problem of heterogeneity between them may arise. Ontology integration provides a solution to this heterogeneity problem and to the problem of interoperability in knowledge-based systems. It offers a mechanism for finding the semantic association between a pair of reference ontologies based on their concepts. Many researchers have worked on the problem of ontology integration; however, several issues related to it remain unaddressed. This dissertation investigates the ontology integration problem and proposes a layer-based enhanced framework as a solution. In the concept matching process, the concepts of the reference ontologies are compared on the basis of their semantics as well as their syntax. The semantic relationship of a concept with other concepts across ontologies, and user confirmation (only for the problematic cases), are also taken into account in this process. The proposed framework is implemented and validated by comparing the proposed concept matching technique with existing techniques. Test case scenarios are provided in order to compare and analyse the proposed framework in the analysis phase. The results of the experiments demonstrate the efficacy and success of the proposed framework.
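
    As an illustrative sketch (not the dissertation's actual technique), the snippet below combines syntactic label similarity with a toy synonym table to decide whether two concepts match, mirroring the semantics-plus-syntax comparison described above; the threshold and synonym data are arbitrary.

```python
from difflib import SequenceMatcher

# A toy synonym table standing in for a lexical resource such as WordNet.
SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "author": {"writer"},
}

def syntactic_similarity(a: str, b: str) -> float:
    """Character-level similarity of concept labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_match(a: str, b: str) -> bool:
    """True if the labels are synonyms according to the lexical resource."""
    a, b = a.lower(), b.lower()
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def concepts_match(a: str, b: str, threshold=0.85) -> bool:
    # Combine syntax and semantics; borderline cases could be deferred
    # to user confirmation, as the framework described above does.
    return syntactic_similarity(a, b) >= threshold or semantic_match(a, b)

print(concepts_match("Car", "Automobile"))   # True via the synonym table
print(concepts_match("Author", "Authors"))   # True via string similarity
```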