
    Innovative strategies for 3D visualisation using photogrammetry and 3D scanning for mobile phones

    3D model generation through photogrammetry is a modern overlay of digital information representing real-world objects in a virtual world. The immediate scope of this study is to generate 3D models from imagery while overcoming the challenge of acquiring accurate 3D meshes. This research aims to find optimised ways to document raw 3D representations of real-life objects and then convert them into retopologised, textured, usable data through mobile phones. Augmented Reality (AR) is a projected combination of real and virtual objects. Much work has been done to create market-dependent AR applications so that customers can view products before purchasing them. The need is to develop a product-independent photogrammetry-to-AR pipeline which is freely available for creating independent 3D augmented models. For the purposes of this paper, the aim is to compare and analyse different open-source SDKs and libraries for developing an optimised 3D mesh using photogrammetry/3D scanning, which will form the main skeleton of the 3D-AR pipeline. Natural disasters, global political crises, terrorist attacks and other catastrophes have led researchers worldwide to capture monuments using photogrammetry and laser scans. Some of these objects of “global importance” are processed by organisations including CyArk (Cyber Archives) and UNESCO’s World Heritage Centre, who work against time to preserve these historical monuments before they are damaged or, in some cases, completely destroyed. There is also a need to question the significance of preserving objects and monuments that might be of value locally to a city or town: what is done to preserve those objects? This research develops pipelines for collecting and processing 3D data so that local communities can contribute to restoring endangered sites and objects using their smartphones, making these objects available to be viewed in location-based AR.
Some companies charge relatively large sums for local scanning projects. This research would contribute a non-profit alternative which could later be used in school curricula, visitor attractions and historical preservation organisations around the globe at no cost. The scope is not limited to furniture, museums or marketing; the pipeline could be used for personal digital archiving as well. This research will capture and process virtual objects using mobile phones, comparing computer vision methodologies from data conversion on mobile phones to 3D generation, texturing and retopologising. The outcomes of this research will be used as input for generating AR that is independent of any industry or product.
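The abstract above describes a staged pipeline from phone imagery to an AR-ready asset. The sketch below is a minimal, hypothetical orchestration of those stages; the stage names and the `Asset` data shape are assumptions for illustration, and a real implementation would delegate each stage to a photogrammetry SDK.

```python
# Hypothetical sketch of the photogrammetry-to-AR pipeline stages named in the
# abstract: capture -> mesh reconstruction -> retopology -> texturing -> AR export.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    stages_applied: list = field(default_factory=list)

def capture_images(name):        # phone camera frames of the object
    return Asset(name, ["capture"])

def reconstruct_mesh(asset):     # structure-from-motion + dense meshing
    asset.stages_applied.append("reconstruct")
    return asset

def retopologise(asset):         # reduce polygon count for mobile AR display
    asset.stages_applied.append("retopologise")
    return asset

def texture(asset):              # bake photo textures onto the simplified mesh
    asset.stages_applied.append("texture")
    return asset

def to_ar(asset):                # export for a location-based AR viewer
    asset.stages_applied.append("export_ar")
    return asset

model = to_ar(texture(retopologise(reconstruct_mesh(capture_images("fountain")))))
print(model.stages_applied)
```

The chained calls make the ordering constraint explicit: retopology and texturing only make sense after a raw mesh exists, which is why the abstract treats mesh generation as the "main skeleton" of the pipeline.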

    The Design and Operation of The Keck Observatory Archive

    The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck Observatory (WMKO) operate the Keck Observatory Archive (KOA). At the end of 2013, KOA completed the ingestion of data from all eight active observatory instruments. KOA will continue to ingest all newly obtained observations, at an anticipated volume of 4 TB per year. The data are transmitted electronically from WMKO to IPAC for storage and curation. Access to data is governed by a data use policy, and approximately two-thirds of the data in the archive are public. Comment: 12 pages, 4 figures, 4 tables. Presented at Software and Cyberinfrastructure for Astronomy III, SPIE Astronomical Telescopes + Instrumentation 2014, June 2014, Montreal, Canada.

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will cause a lack of annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    Computing and data processing

    The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.

    A geodatabase for multisource data applied to cultural heritage: The case study of Villa Revedin Bolasco

    In this paper we present the results of the development of a Web-based archiving and documentation system aimed at the management of multisource and multitemporal data related to cultural heritage. As a case study we selected the building complex of Villa Revedin Bolasco in Castelfranco Veneto (Treviso, Italy) and its park. Buildings and park were built in the XIX century after several restorations of the original XIV century area. The data management system relies on a geodatabase framework in which different kinds of datasets are stored. More specifically, the geodatabase elements consist of historical information, documents, and descriptions of the artistic characteristics of the building and the park, in the form of text and images. In addition, we also used floorplans, sections and views of the outer facades of the building extracted from a TLS-based 3D model of the whole Villa. In order to manage and explore this rich dataset, we developed a geodatabase using PostgreSQL with PostGIS as its spatial extension. The Web-GIS platform, based on HTML5 and PHP, implements the NASA Web World Wind virtual globe, which we used to enable the navigation and interactive exploration of the park. Furthermore, through a specific timeline function, the user can explore the historical evolution of the building complex.
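The timeline function described above amounts to querying multitemporal records by a validity interval. The paper's system uses PostgreSQL/PostGIS; the sketch below models only the timeline lookup with stdlib sqlite3, and the table name, column names, and years are hypothetical (the PostGIS geometry column is omitted entirely).

```python
# Hypothetical relational model for multitemporal heritage records and a
# timeline query; the real system uses PostgreSQL + PostGIS, not sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE heritage_record (
        id         INTEGER PRIMARY KEY,
        title      TEXT NOT NULL,
        kind       TEXT,       -- 'document', 'image', 'floorplan', ...
        valid_from INTEGER,    -- year the record starts to apply
        valid_to   INTEGER     -- year it stops applying (NULL = still valid)
    )""")
conn.executemany(
    "INSERT INTO heritage_record (title, kind, valid_from, valid_to) "
    "VALUES (?, ?, ?, ?)",
    [
        ("Original villa plan", "floorplan", 1350, 1852),
        ("Post-restoration facade survey", "image", 1852, None),
    ])

def records_for_year(year):
    """Timeline function: all records valid in the given year."""
    cur = conn.execute(
        "SELECT title FROM heritage_record "
        "WHERE valid_from <= ? AND (valid_to IS NULL OR valid_to >= ?)",
        (year, year))
    return [row[0] for row in cur.fetchall()]

print(records_for_year(1700))
print(records_for_year(1900))
```

Keeping validity as a pair of columns rather than a single date is what lets a Web-GIS timeline slider resolve "what existed in year X" with one range query.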

    Web-based manipulation of multiresolution micro-CT images

    Micro computed tomography (μCT) scanning is opening a new world for medical researchers. Scientific data of several tens of gigabytes per image are created and usually require storage on a common server such as a Picture Archiving and Communication System (PACS). Previewing these data online in a meaningful way is an essential part of such systems. Radiologists, who have been working with CT data for a long time, commonly look at two-dimensional slices of 3D image stacks. Conventional web viewers such as Google Maps and Deep Zoom use tiled multiresolution images for faster display of large 2D data. In the medical area this approach is being adapted for high-resolution 2D images. Solutions that include basic image processing still rely on browser-external tools and high-performance client machines. In this paper we optimized and modified the Brain Maps API to create an interactive orthogonal-sectioning image viewer for medical μCT scans, based on JavaScript and HTML5. We show that tiling of images reduces the processing time by a factor of two. Different file formats are compared regarding their quality and time to display. A sample end-to-end application also demonstrates the feasibility of this solution for custom-made image acquisition systems.
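The tiled multiresolution scheme mentioned above (as in Google Maps and Deep Zoom) builds a pyramid in which each level halves both image dimensions and every level is cut into fixed-size tiles, so the viewer fetches only the tiles in view. The tile size and level convention below are illustrative assumptions, not the paper's actual parameters.

```python
# Tile-pyramid arithmetic for a Deep-Zoom-style multiresolution viewer.
# Level 0 is full resolution; each higher level halves both dimensions.
import math

TILE = 256  # pixels per square tile edge (a common, assumed choice)

def level_dimensions(width, height, level):
    """Image size in pixels after halving `level` times (rounded up)."""
    return (math.ceil(width / 2**level), math.ceil(height / 2**level))

def tiles_at_level(width, height, level):
    """Number of TILE x TILE tiles needed to cover the image at `level`."""
    w, h = level_dimensions(width, height, level)
    return math.ceil(w / TILE) * math.ceil(h / TILE)

# A 4096 x 4096 slice of a micro-CT stack:
for level in range(3):
    print(level, tiles_at_level(4096, 4096, level))
```

Because tile counts shrink by roughly 4x per level, panning at low zoom touches only a handful of small files, which is the effect behind the reported speedup from tiling.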

    Standardization of electroencephalography for multi-site, multi-platform and multi-investigator studies: Insights from the canadian biomarker integration network in depression

    Subsequent to global initiatives in mapping the human brain and investigations of neurobiological markers for brain disorders, the number of multi-site studies involving the collection and sharing of large volumes of brain data, including electroencephalography (EEG), has been increasing. Among the complexities of conducting multi-site studies and increasing the shelf life of biological data beyond the original study are the timely standardization and documentation of relevant study parameters. We present the insights gained and guidelines established within the EEG working group of the Canadian Biomarker Integration Network in Depression (CAN-BIND). CAN-BIND is a multi-site, multi-investigator, and multi-project network supported by the Ontario Brain Institute with access to Brain-CODE, an informatics platform that hosts a multitude of biological data across a growing list of brain pathologies. We describe our approaches and insights on documenting and standardizing parameters across the study design, data collection, monitoring, analysis, integration, knowledge-translation, and data archiving phases of CAN-BIND projects. We introduce a custom-built EEG toolbox to track data preprocessing, with open access for the scientific community. We also evaluate the impact of variation in equipment setup on the accuracy of acquired data. Collectively, this work is intended to inspire the establishment of comprehensive and standardized guidelines for multi-site studies.
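Tracking data preprocessing, as the CAN-BIND toolbox does, means recording which operations were applied to each recording, with what parameters, and when. The abstract does not expose the toolbox's API, so the following is a minimal, hypothetical provenance-logging sketch; the class name, step names, and filter parameters are all illustrative assumptions.

```python
# Hypothetical provenance log for EEG preprocessing steps; the real
# CAN-BIND toolbox's interface is not described in the abstract.
import json
import datetime

class ProvenanceLog:
    def __init__(self, recording_id):
        self.recording_id = recording_id
        self.steps = []

    def record(self, step, **params):
        """Append one preprocessing step with its parameters and a UTC timestamp."""
        self.steps.append({
            "step": step,
            "params": params,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def to_json(self):
        """Serialize the log so it can be archived alongside the recording."""
        return json.dumps({"recording": self.recording_id, "steps": self.steps},
                          indent=2)

log = ProvenanceLog("site03_sub012_eeg")
log.record("bandpass_filter", low_hz=0.5, high_hz=70.0)
log.record("notch_filter", freq_hz=60.0)
log.record("rereference", scheme="average")
print([s["step"] for s in log.steps])
```

Archiving such a log with the data is one concrete way to extend the "shelf life" of a recording: a later analyst can reconstruct exactly which filters a multi-site dataset has already been through.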

    Digital forensics formats: seeking a digital preservation storage format for web archiving

    In this paper we discuss archival storage formats from the point of view of digital curation and preservation. Taking established approaches to data management as our jumping-off point, we selected seven format attributes which are core to the long-term accessibility of digital materials; these we have labeled core preservation attributes. These attributes are then used as evaluation criteria to compare file formats belonging to five common categories: formats for archiving selected content (e.g. tar, WARC), disk image formats that capture data for recovery or installation (partimage, dd raw image), these two types combined with a selected compression algorithm (e.g. tar+gzip), formats that combine packing and compression (e.g. 7-zip), and forensic file formats for data analysis in criminal investigations (e.g. aff, the Advanced Forensic Format). We present a general discussion of the file format landscape in terms of these attributes, and make a direct comparison between the three most promising archival formats: tar, WARC, and aff. We conclude by suggesting next steps to take the research forward and to validate the observations we have made.
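One of the compared categories, an archive format combined with a compression algorithm (tar+gzip), can be exercised directly from Python's standard library. The sketch below packs a file, then extracts it and verifies a fixity checksum; the file name and payload are invented, and the checksum step stands in for preservation attributes such as integrity checking rather than reproducing the paper's evaluation.

```python
# tar + gzip round trip with a fixity check, using only the stdlib.
import tarfile
import hashlib
import os
import tempfile

def sha256(data):
    return hashlib.sha256(data).hexdigest()

payload = b"scanned page 001"          # invented stand-in for archived content
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "page001.tif")
    with open(src, "wb") as f:
        f.write(payload)
    checksum = sha256(payload)         # recorded before archiving

    # pack: tar and gzip in one pass ("w:gz" mode)
    archive = os.path.join(d, "batch.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname="page001.tif")

    # unpack and verify fixity against the recorded checksum
    with tarfile.open(archive, "r:gz") as tar:
        extracted = tar.extractfile("page001.tif").read()
    assert sha256(extracted) == checksum
    print("fixity ok")
```

Note that the checksum lives outside the archive here; formats like aff differ precisely in carrying such integrity metadata inside the container, which is one axis the paper's comparison examines.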

    An investigation to examine the most appropriate methodology to capture historical and modern preserved anatomical specimens for use in the digital age to improve access: a pilot study

    Anatomico-pathological specimens constitute a valuable component of many medical museums or institutional collections but can be limited in their impact on account of both physical and intellectual inaccessibility. Further concerns relate to conservation, as anatomical specimens may be subject to tissue deterioration, constraints imposed by spatial or financial limitations of the host institution, or accident-based destruction. In awareness of these issues, a simple and easily implementable methodology to increase the accessibility, impact and conservation of anatomical specimens is proposed which combines photogrammetry, object virtual reality (object VR), and interactive portable document format (PDF) with supplementary historical and anatomical commentary. The methodology was developed using wet, dry, and plastinated specimens from the historical and modern collections in the Museum of Anatomy at the University of Glasgow. It was found that photogrammetry yielded excellent results for plastinated specimens and showed potential for dry specimens, while object VR produced excellent photorealistic virtual specimens for all materials visualised. Use of PDF as the output format was found to allow for the addition of textual, visual, and interactive content, and as such supplemented the virtual specimen with multidisciplinary information adaptable to the needs of various audiences. The results of this small-scale pilot study indicate the beneficial nature of combining these established techniques into a methodology for the digitisation and utilisation of historical anatomical collections in particular, but also collections of material culture more broadly.

    Processing Internal Hard Drives

    • …