
    Visualisation of a three-dimensional (3D) object’s optimal reality in a 3D map on a mobile device

    Prior research on the visualisation of three-dimensional (3D) objects with coordinate systems has established that all objects are first translated so that the eye sits at the origin (eye space). Multiplying a point in eye space by the perspective projection matrix yields perspective space, and dividing by the perspective component yields screen space. This paper built on these findings and investigated the key factor(s) in the visualisation of 3D objects within 3D maps on mobile devices. The study was motivated by the disparity between 3D objects within a 3D map on a mobile device and those on other devices; this difference might undermine the capabilities of a 3D map view on a mobile device, a concern that arises while interacting with such a view. It is unclear whether more users will be able to recognise the real world as the 3D map view on a mobile device becomes more realistic. We used regression analysis to rigorously explain the participants’ responses and the Decision Making Trial and Evaluation Laboratory method (DEMATEL) to select the key factor(s) that caused or were affected by 3D object views. The regression analyses revealed that eye space, perspective space and screen space were all associated with the 3D viewing of 3D objects in 3D maps on mobile devices, and that eye space had the strongest impact. The DEMATEL results, obtained with both its original and revised steps, showed that prolonged viewing of 3D objects in a 3D map on mobile devices was the most important factor for eye space, a long viewing distance was the most significant factor for perspective space, and a large screen size was the most important factor for screen space. In conclusion, a 3D map view on a mobile device allows for the visualisation of a more realistic environment.
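    The eye space, perspective space and screen space referred to in the abstract correspond to the standard graphics projection pipeline. The sketch below illustrates that pipeline with numpy; the field of view, screen resolution and sample point are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the eye-space -> perspective-space -> screen-space pipeline
# described in the abstract. Matrix layout, field of view, and screen size are
# illustrative assumptions, not values from the study.
import numpy as np

def perspective_matrix(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def world_to_screen(point_world, eye, proj, screen_w, screen_h):
    # Eye space: translate so the eye sits at the origin (rotation omitted here).
    p_eye = np.append(point_world - eye, 1.0)
    # Perspective space: multiply the eye-space point by the projection matrix.
    p_clip = proj @ p_eye
    # Screen space: divide by the perspective (w) component, then map to pixels.
    ndc = p_clip[:3] / p_clip[3]
    x = (ndc[0] * 0.5 + 0.5) * screen_w
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * screen_h
    return x, y

proj = perspective_matrix(fov_y_deg=60.0, aspect=9 / 16, near=0.1, far=100.0)
print(world_to_screen(np.array([0.5, 0.2, -3.0]), np.zeros(3), proj, 1080, 1920))
```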

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing a visual representation of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the use of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed through specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras, with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localisation tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events. The study explored three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to the spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. As such, videos in context are a suitable alternative to more complex and often expensive solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
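    One way to relate a video spatially to its panoramic context is to map the camera's viewing direction to coordinates in an equirectangular panorama. The sketch below illustrates this idea; the yaw/pitch convention and panorama size are assumptions for illustration and do not reproduce the thesis' actual registration method.

```python
# Hypothetical sketch of spatially registering a video in an equirectangular
# panorama: convert the camera's viewing direction into panorama pixel
# coordinates so the video frame can be overlaid at that location. The yaw/pitch
# convention and panorama size are assumptions, not the thesis' actual system.

def direction_to_equirect(yaw_deg, pitch_deg, pano_w, pano_h):
    """Map a viewing direction (yaw, pitch in degrees) to panorama pixels."""
    # Yaw in [-180, 180] maps to horizontal position, pitch in [-90, 90] to vertical.
    u = (yaw_deg + 180.0) / 360.0 * pano_w
    v = (90.0 - pitch_deg) / 180.0 * pano_h
    return int(u) % pano_w, min(max(int(v), 0), pano_h - 1)

# Example: a webcam looking 45 degrees to the right and slightly downwards
# in an 8192x4096 panorama of the remote room.
print(direction_to_equirect(yaw_deg=45.0, pitch_deg=-10.0, pano_w=8192, pano_h=4096))
```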

    The use of low cost virtual reality and digital technology to aid forensic scene interpretation and recording

    Crime scenes are often short-lived, and the opportunity to acquire sufficient information must not be lost before the scene is disturbed. With the growth of information technology (IT) in many other scientific fields, there are also substantial opportunities for IT in the area of forensic science. The thesis sought to explore means by which IT can assist and benefit the ways that forensic information is illustrated and elucidated in a logical manner. The central research hypothesis is that, through the use of low cost IT, the visual presentation of information will be of significant benefit to forensic science, in particular for the recording of crime scenes and their presentation in court. The research hypothesis was addressed by first exploring current crime scene documentation techniques and their strengths and weaknesses, giving an indication of the possible niche that technology could occupy within forensic science. The underlying principles of panoramic technology were examined, highlighting its ability to express spatial information efficiently. Through literature review and case studies, the current status of the technology within the forensic community and courtrooms was also explored to gauge its possible acceptance as a forensic tool. This led to the construction of a low cost semi-automated imaging system capable of capturing the images needed to form a panorama. This provides the ability to pan around, effectively placing the viewer at the crime scene. Evaluation and analysis involving forensic personnel was performed to assess the capabilities and effectiveness of the imaging system as a forensic tool. The imaging system was found to enhance the repertoire of techniques available for crime scene documentation, possessing sufficient capabilities and benefits to warrant its use within the area of forensics, thereby supporting the central hypothesis.
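    As a rough illustration of the kind of processing such a capture system feeds into, the sketch below stitches a set of overlapping photographs into a single panorama with OpenCV's high-level Stitcher. The file names are placeholders, and this is not the imaging system built in the thesis.

```python
# Minimal sketch of assembling captured crime-scene photographs into a single
# panorama using OpenCV's high-level Stitcher. The file names are placeholders;
# the thesis' own semi-automated capture rig and viewer are not reproduced here.
import cv2

def build_panorama(image_paths, out_path="scene_panorama.jpg"):
    images = [cv2.imread(p) for p in image_paths]
    if any(img is None for img in images):
        raise FileNotFoundError("One or more captured images could not be read")
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status {status}")
    cv2.imwrite(out_path, panorama)
    return out_path

# Example: images taken by rotating the camera about a fixed tripod position.
# build_panorama(["scene_000.jpg", "scene_030.jpg", "scene_060.jpg"])
```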

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution and wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers on how to optimise the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of the existing literature and previous research, carried out to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies, based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the role of the characteristics of the environment shown to the viewer, and the role of the display used for the remote observation of the virtual environment are further investigated. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues such as lights and shadows are combined with binocular depth cues. Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes offers a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
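    The core idea behind eye-adapted HDR can be sketched as gaze-driven exposure: estimate the luminance around the tracked gaze point and expose and tone-map the HDR frame for that region. The code below is an illustrative approximation; the window size, key value, and tone curve are assumptions and do not reproduce the thesis' actual implementation.

```python
# Illustrative sketch of gaze-adaptive exposure for HDR viewing: estimate the
# luminance around the tracked gaze point and expose/tone-map the HDR frame
# accordingly. Window size, key value, and the tone curve are assumptions and
# not the thesis' actual eye-adapted HDR implementation.
import numpy as np

def eye_adapted_tonemap(hdr_rgb, gaze_xy, window=64, key=0.18):
    """hdr_rgb: float32 array (H, W, 3) of linear radiance; gaze_xy: (x, y) pixels."""
    h, w, _ = hdr_rgb.shape
    x, y = gaze_xy
    x0, x1 = max(0, x - window), min(w, x + window)
    y0, y1 = max(0, y - window), min(h, y + window)
    # Luminance (Rec. 709 weights) of the region the viewer is looking at.
    lum = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])
    adapt_lum = np.exp(np.mean(np.log(lum[y0:y1, x0:x1] + 1e-6)))
    # Expose the whole frame for the gazed region, then apply a simple
    # Reinhard-style curve and gamma for display.
    exposed = hdr_rgb * (key / adapt_lum)
    ldr = exposed / (1.0 + exposed)
    return np.clip(ldr ** (1.0 / 2.2), 0.0, 1.0)

# Example with a synthetic HDR frame and a gaze point in the bright half.
frame = np.concatenate([np.full((256, 256, 3), 0.05, np.float32),
                        np.full((256, 256, 3), 20.0, np.float32)], axis=1)
print(eye_adapted_tonemap(frame, gaze_xy=(400, 128)).mean())
```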

    Le Passage - Towards the Concept of a New Knowledge Instrument

    This dissertation is concerned with the analysis and development of the passage concept in immersive dome environments (IDE). The research follows an interdisciplinary approach that draws on practices of scientific and artistic visualisation in the process of knowledge production. The research methodology is informed by my working practice of developing experiences for spherical displays, first inside fulldome planetariums and currently also inside further 360° media formats such as VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality). The methodology is further underpinned by media archaeology and interrogated through an ethnographic process of expert conversations and interviews. The media archaeology part involves the investigation of historical epistemic concepts in science communication in the fields of geography and cosmology used in spherical environments from the 17th century to the present day. The evolution of the creation process for spherical environments shows how the ways we think about, understand, and act with spatial knowledge have shifted. The practical element is the construction of passage corridors in science and art in order to generate new knowledge, which I define as passages. The passage concept is further enriched through the lenses of the art of understanding, the diagrammatic, and visuals as knowledge instruments. The main tool is the IDE, since it has the epistemic potential to create passages through time and scale. In this research, the IDE is both an object of investigation, considered in terms of its historical classification and its immersive capabilities, and an active instrument that produces knowledge and steers artistic language. It can be understood as a model, instrument, environment, and vehicle, being itself in a transitional state, from a historical planetarium environment to a new non-space, allowing for unique and engaging media art forms. In doing so, the IDE blends scientific frameworks with artistic processes, transforming the newest insights of immersive perception into a new state of the art. The IDE makes this evident through the method of passage and navigation. New future scenarios are presented whilst expanding the passage concept, which can aid our spatial localisation, orientation, and self-constitution, thus shifting our perspective from a sense of place to a sense of planet. Professorinnenprogramm des Bundes und der Länder, Fachhochschule Kiel (University of Applied Sciences Kiel).

    A Phenomenological approach to media art environments: The Immersive art experience and the Finnish art scene

    This research focuses on immersive art, defined as a multimedia experience in which visitors interact with artwork whilst immersed in a range of sensory experiences. In this dissertation, I investigate the immersive art experience from the perspectives of art history, social theory, and media studies, situated within a phenomenological theoretical framework. I present a comparative analysis of forms of immersive spatiality, including projected moving-image art, spatial environments, participatory installations, video art installations and interactive environments in the international art scene. One of my objectives is to emphasise the role of video art in the development of interactive and immersive art environments. The growing importance of spectators in giving meaning to the artwork allows immersivity to be analysed in relation to the notions of spectacle and spectatorship. I connect disciplines, practices and concepts by adopting principles from Maurice Merleau-Ponty’s phenomenological writings. Spatiality and motility are pivotal points in immersive experiences. Immersive art, as an embodied mutual experience, materialises the phenomenological concepts of spectatorship, corporeality, motility, porosity, chiasm, and encounter. I have selected a group of relevant Finnish artists from different generations to characterise the development of media art, and particularly immersive media art, in an international context. The group includes Eija-Liisa Ahtila, Lauri Astala, Laura Beloff, Hanna Haaslahti, Tuomas A. Laitinen, Erkka Nissinen, and Marjatta Oja. I examine the historical dissemination of phenomenology in Finland and a renewed interest in it in the 1990s, which coincided with the spatialisation of video art and the emergence of immersivity. I also investigate the opening of the Kiasma Museum of Contemporary Art and its impact on Finnish culture, and the recent Amos Rex Museum, built specifically for immersive exhibitions. Given the unstable nature of media art, I analyse the changes in displaying art collections and exhibitions, the new commitments of art museums and the innovative directions taken by media conservators. My examination of immersive art, with its performativity and transience, reveals environmentally friendly and sustainable aspects.

    Graphics Insertions into Real Video for Market Research


    An Orientation & Mobility Aid for People with Visual Impairments

    Orientation & Mobility (O&M) comprises a set of techniques that help people with visual impairments find their way in everyday life. Integrating these techniques into daily routines nevertheless requires extensive and very time-consuming one-on-one training with O&M instructors. While some of these techniques make use of assistive technologies, such as the long cane, points-of-interest databases or a compass-based orientation system, an inconspicuous communication gap exists between available aids and navigation systems. In recent years, mobile computing systems, smartphones in particular, have become ubiquitous. This gives modern computer vision techniques the opportunity to support human vision with everyday problems that arise from non-accessible design. However, special care must be taken not to conflict with people's specific personal competencies and trained behaviours or, in the worst case, to contradict O&M techniques. In this dissertation we identify a spatial and a systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long cane, can only help to perceive the environment within a limited range, whereas navigation information is kept very coarse. In addition, the gap is systemic, arising between these two components: the long cane does not know the route, while a navigation system does not consider nearby obstacles or O&M techniques. We therefore propose several approaches to close this gap, improving the connection and communication between orientation aids and navigation information, and we approach the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and derive several key concepts that we observe in our algorithms and prototypes. Existing assistive systems for orientation are mainly based on global navigation satellite systems. We attempt to improve on these by creating a routing algorithm based on guiding lines that can be adapted to, and takes into account, individual needs. The generated routes are imperceptibly longer but much safer, according to objective criteria developed in collaboration with O&M instructors. We also improve the availability of the relevant geo-referenced databases required for such needs-based routing. To this end, we develop a machine learning approach for detecting zebra crossings in aerial imagery, which also works across country borders and improves on the state of the art. To maximise the benefit of computer-vision-based mobility assistance, we design approaches modelled on O&M techniques to increase spatial awareness of the immediate surroundings. First, we consider the available free space and also inform the user about possible obstacles. Furthermore, we create a novel approach to detect and precisely localise the available guiding lines, and we generate virtual guiding lines that bridge interruptions and provide information about the next guiding line early on. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep learning approach. To analyse whether the approaches and algorithms we created provide an actual benefit for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. We also carry out a larger study with several components and a focus on pedestrian crossings. Although our statistical evaluations show only a slight improvement, influenced by technical problems with the first prototype and too little time for participants to become accustomed to the system, we receive very promising comments from almost all study participants. This shows that we have already taken an important first step towards closing the identified gap and have thereby improved Orientation & Mobility for people with visual impairments.
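    The needs-based routing idea, preferring routes with guiding lines and safe crossings even if they are slightly longer, can be sketched as shortest-path search with user-specific edge penalties. The example below uses networkx; the graph, edge attributes and penalty factors are hypothetical illustrations, not the dissertation's actual criteria or data.

```python
# Illustrative sketch of needs-based routing: prefer edges with tactile guiding
# lines and safe crossings by penalising edge length with per-user weights.
# The graph, edge attributes, and penalty factors are hypothetical examples,
# not the dissertation's actual criteria or data.
import networkx as nx

def edge_cost(length_m, has_guiding_line, crossing, prefs):
    cost = length_m
    if not has_guiding_line:
        cost *= prefs.get("no_guideline_penalty", 1.5)
    if crossing == "uncontrolled":
        cost *= prefs.get("uncontrolled_crossing_penalty", 3.0)
    elif crossing == "zebra":
        cost *= prefs.get("zebra_penalty", 1.2)
    return cost

def safest_route(graph, start, goal, prefs):
    def weight(u, v, data):
        return edge_cost(data["length"], data.get("guiding_line", False),
                         data.get("crossing", "none"), prefs)
    return nx.shortest_path(graph, start, goal, weight=weight)

g = nx.Graph()
g.add_edge("A", "B", length=120, guiding_line=True)
g.add_edge("B", "C", length=80, guiding_line=True, crossing="zebra")
g.add_edge("A", "C", length=150, crossing="uncontrolled")
print(safest_route(g, "A", "C", prefs={"uncontrolled_crossing_penalty": 4.0}))
```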