
    Augmented Reality for Restoration/Reconstruction of Artefacts with Artistic or Historical Value

    The artistic or historical value of a structure, such as a monument, a mosaic, a painting or, more generally, an artefact, arises from the novelty it embodies and the development it represents in a certain field at a certain time of human activity. The more faithfully the structure preserves its original state, the greater its artistic and historical value. It is therefore fundamental to preserve its original condition, keeping it as genuine as possible over time. Nevertheless, preservation is not always possible (traumatic events such as wars can occur), nor has it always been carried out, whether through negligence, incompetence, or even culpable unwillingness. As a result, the condition of a considerable number of such structures today ranges from poor to catastrophic. In this context, current technology provides fundamental help for reconstruction and restoration, bringing a structure back to its original historical value and condition. Among modern tools, new possibilities arise from Augmented Reality (AR), which combines virtual reality (VR) settings with real physical materials and instruments. The idea is to carry out a virtual reconstruction/restoration before materially acting on the structure itself. This yields several advantages: manpower and machinery are employed only in the final phase of the reconstruction; potential damage or abrasion to parts of the structure is avoided during the cataloguing phase; and the forms and dimensions of any missing pieces can be defined precisely. The virtual reconstruction/restoration can be further improved by exploiting AR, which provides many additional informative parameters that can be essential under specific circumstances. Here we detail the application of AR to the restoration and reconstruction of structures with artistic and/or historical value.

    Freeform 3D interactions in everyday environments

    PhD thesis. Personal computing is continuously moving away from traditional mouse-and-keyboard input as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which pursues a similar goal of reducing the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, they have mostly been studied separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restrictive when interacting in the real world. NUI and AR can be seen as highly complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and meaningfully combine the positive qualities attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness and mobility in various physical settings. A number of technical challenges arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.

    New metric products, movies and 3D models from old stereopairs and their application to the in situ palaeontological site of Ambrona

    This paper is based on the information gathered in the following project: LDGP_mem_006-1: "[S_Ambrona_Insitu] Levantamiento fotogramétrico del yacimiento paleontológico “Museo in situ” de Ambrona (Soria)" (photogrammetric survey of the palaeontological site “Museo in situ” of Ambrona, Soria), http://hdl.handle.net/10810/7353. 3D modelling tools based on photographic pictures have improved significantly in recent years. One of the most outstanding changes is the spread of photogrammetric systems based on algorithms referred to as Structure from Motion (SfM), in contrast with traditional stereoscopic pairs. Nevertheless, the availability of important collections of stereoscopic registers gathered during past decades invites us to explore the possibility of re-using these photographs to generate new multimedia products, especially since many of the documented elements have been substantially altered or have even disappeared. This article analyses an example of such re-use with a collection of photographs from the palaeontological site of Ambrona (Soria, Spain). More specifically, several software packages based on SfM algorithms are tested for the generation of 3D models with photographic textures, and derived products such as orthoimages, video and Augmented Reality (AR) applications are presented.
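The geometric core shared by classic stereopair photogrammetry and modern SfM pipelines is triangulation: recovering a 3D point from its matched projections in two calibrated images. The sketch below is a minimal, illustrative direct linear transform (DLT) triangulation; the camera intrinsics, baseline, and test point are made up for the example and are not taken from the Ambrona project.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one correspondence (x1 in image 1, x2 in image 2)."""
    # Each image point contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Two cameras with identical (illustrative) intrinsics; the second is
# shifted one unit along x, mimicking a stereopair baseline.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.], [0.], [0.]]])

X_true = np.array([0.2, -0.1, 5.0])  # a 3D point in front of both cameras
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_rec, 3))  # recovers the original point
```

In a full SfM system the projection matrices themselves are estimated from the photographs (feature matching, relative pose, bundle adjustment); here they are given, which is closer to the classic stereopair setting the article starts from.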

    Physical sketching tools and techniques for customized sensate surfaces

    Sensate surfaces are a promising avenue for enhancing human interaction with digital systems due to their inherent intuitiveness and natural user interface. Recent technological advancements have enabled sensate surfaces to surpass the constraints of conventional touchscreens by integrating them into everyday objects, creating interactive interfaces that can detect inputs such as touch, pressure, and gestures. This allows for more natural and intuitive control of digital systems. However, prototyping interactive surfaces customized to users' requirements remains technically challenging with conventional techniques, which have difficulty accommodating complex geometric shapes and varying sizes. Furthermore, it is crucial to consider the context in which customized surfaces are used, as relocating them to fabrication labs may lead to the loss of their original design context. Additionally, prototyping high-resolution sensate surfaces is challenging due to the complex signal-processing requirements involved. This thesis investigates the design and fabrication of customized sensate surfaces that meet the diverse requirements of different users and contexts. The research aims to develop novel tools and techniques that overcome the technical limitations of current methods and enable the creation of sensate surfaces that enhance human interaction with digital systems.

    Mobile and Low-cost Hardware Integration in Neurosurgical Image-Guidance

    It is estimated that 13.8 million patients per year require neurosurgical interventions worldwide, be it for a cerebrovascular disease, stroke, tumour resection, or epilepsy treatment, among others. These procedures involve navigating through and around complex anatomy in an organ where damage to eloquent healthy tissue must be minimized. Neurosurgery thus has very specific constraints compared to most other domains of surgical care, and these constraints have made it particularly suitable for integrating new technologies. Any new method with the potential to improve surgical outcomes is worth pursuing, as it can not only save and prolong patients' lives but also increase their quality of life post-treatment. In this thesis, novel neurosurgical image-guidance methods are developed using currently available, low-cost, off-the-shelf components. In particular, a mobile device (e.g. smartphone or tablet) is integrated into a neuronavigation framework to explore new augmented reality visualization paradigms and novel intuitive interaction methods. The developed tools aim to make image guidance more intuitive and easier to use through augmented reality. Further, we use gestures on the mobile device to increase interactivity with the neuronavigation system, in order to address the accuracy loss and brain shift that occur during surgery. Lastly, we explore the effectiveness and accuracy of low-cost hardware components (i.e. tracking systems and ultrasound) that could replace the high-cost hardware integrated into commercial image-guided neurosurgery systems. The results of our work show the feasibility of using mobile devices to improve neurosurgical processes. Augmented reality enables surgeons to focus on the surgical field while receiving intuitive guidance information. Mobile devices also allow for easy interaction with the neuronavigation system, enabling surgeons to interact directly with systems in the operating room to improve accuracy and streamline procedures. Lastly, our results show that low-cost components can be integrated into a neurosurgical guidance system at a fraction of the cost, with a negligible impact on accuracy. The developed methods have the potential to improve surgical workflows, as well as democratize access to higher-quality care worldwide.

    EyeRing: A Finger-Worn Input Device for Seamless Interactions with Our Surroundings

    Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and the implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users, which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study that demonstrates how EyeRing reduces effort and disruption for a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment. Singapore University of Technology and Design, International Design Center (IDC grants IDG31100104A and IDD41100102A).

    An Investigation in printing from a remote field location using wireless communications

    Computers have changed the way our society works. Everyday life is in some way affected by computers, and they have changed the way many industries do business; the business world is now a global community. The Graphic Arts industry has been impacted by these changes. With computers, documents are now found in digital form: instead of being prepared by hand, they are compiled within the computer realm. By using modems, these documents can travel from one location and be printed at several different locations, even worldwide. As the computer evolves, it is also becoming more portable, so that our mobile society is not tied to one location. Alongside this mobility, there is a strong trend toward communications that are also mobile, and wireless technologies are advancing at a rapid rate to keep up with customer demand. By combining these emerging technologies, is it possible that a person with some knowledge of computers and peripherals, desktop publishing, and digital photography can transmit documents using cellular communications from a field location to a digital press to produce a finished product? A digital camera was used to capture images for a test document. The images were then downloaded to a laptop computer, where they were adjusted to fit the parameters of the final output device. The test document was prepared using QuarkXPress and included four of the images taken with the digital camera. When the document was complete, it was saved to a PostScript file. Transmission of the file was possible by using a PCMCIA fax/modem card installed in the portable computer, connected through a special cellular phone adapter to a Motorola Elite cellular phone. Two transmission tests were attempted. The first test used the internet to connect to the file server at the Digital Publishing Center at Rochester Institute of Technology. The second test used Xerox proprietary software, Launch, to link the computer to the file server at Suttons' Printing in Grand Junction, Colorado. In test one, the transmission of the file was completed: at the Digital Publishing Center, the file was transferred from the file server to the job manager on the Docutech 9500, a proof was made, and fifteen final copies were produced. In test two, the transmission could not be completed: the file server was accessed and logged on to, but the download never finished, as the server would disconnect shortly after the downloading process began. Test one showed that it is possible to transmit a digital file using wireless communication and successfully print a product of acceptable quality. Test two showed that problems can occur that prevent transmission from completing. This technology is still in its infancy; as it continues to advance, this type of transmission could become more commonplace, allowing printing to take place anywhere.
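The core of the workflow above is shipping a prepared PostScript file over a network link to a remote print server. A minimal modern sketch of that step is a raw TCP transfer, the same idea as present-day port-9100 "raw" printing; the host, port, file contents, and the in-process stand-in server below are all illustrative, not the RIT or Suttons' setup described in the study.

```python
import socket
import threading

def send_job(host, port, data: bytes) -> None:
    """Push a print job's bytes to a server over a raw TCP connection."""
    with socket.create_connection((host, port)) as s:
        s.sendall(data)

# Demo: a stand-in "print server" on localhost that just collects the
# bytes it receives, so the transfer can be verified end to end.
received = bytearray()
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    with conn:
        while chunk := conn.recv(4096):
            received.extend(chunk)

t = threading.Thread(target=serve_once)
t.start()

# A tiny stand-in for the PostScript file saved from the page-layout program.
job = b"%!PS-Adobe-3.0\n/Helvetica findfont 12 scalefont setfont\nshowpage\n"
send_job("127.0.0.1", port, job)
t.join()
server.close()
print("transmitted", len(received), "bytes")
```

The cellular modem link in the study plays the role of the TCP connection here; the reliability problems seen in test two correspond to the connection dropping mid-transfer, which a real job-submission protocol would handle with retries or resumable uploads.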