
    An effective method to obtain contour of fisheye images based on explicit level set method

    Obtaining an effective contour from an image taken with a fisheye lens is important for subsequent processing, and many studies have tried to develop methods that extract accurate contours from fisheye images. The traditional level set method (the CV model) struggles to satisfy the requirement that the final segmentation region be a circle. Therefore, both the preprocessing of fisheye images and the traditional level set method are redesigned to obtain a circular final segmentation that may also suit other applications. In this paper, we use a local entropy method to even out the pixel values inside the effective circular region, then a thresholding step to remove the hole(s), and finally an explicit circular level set method to obtain the final segmentation. Experimental results show that the segmentation is effective.
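
    The pipeline described above can be sketched in a few lines. The sketch below is an illustrative approximation rather than the authors' implementation: local entropy filtering and Otsu thresholding with hole filling provide the preprocessed mask, and a simple circle fit to the largest region stands in for the explicit circular level set evolution; the function name and parameter values are assumptions.

```python
# Assumed sketch of the described pipeline (not the paper's code):
# local entropy -> threshold -> hole filling -> circle fit.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage import io, img_as_ubyte
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.measure import label, regionprops
from skimage.morphology import disk

def fisheye_circle(path, entropy_radius=9):
    gray = img_as_ubyte(io.imread(path, as_gray=True))
    # Local entropy evens out pixel values inside the effective circular region.
    ent = entropy(gray, disk(entropy_radius))
    # A global threshold separates the textured circle from the dark border,
    # and hole filling removes interior holes left by low-entropy patches.
    mask = binary_fill_holes(ent > threshold_otsu(ent))
    # Fit a circle to the largest connected component; centroid plus
    # equivalent radius approximate the circular level set result.
    biggest = max(regionprops(label(mask)), key=lambda r: r.area)
    cy, cx = biggest.centroid
    radius = float(np.sqrt(biggest.area / np.pi))
    return (cx, cy), radius
```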

    3D panoramic imaging for virtual environment construction

    The project is concerned with the development of algorithms for the creation of photo-realistic 3D virtual environments, overcoming problems in mosaicing, colour and lighting changes, correspondence search speed, and correspondence errors due to lack of surface texture. A number of related new algorithms have been investigated for image stitching, content-based colour correction, and efficient 3D surface reconstruction. All of the investigations were undertaken using multiple views from normal digital cameras, web cameras, and a "one-shot" panoramic system. In the process of 3D reconstruction, a new interest-point based mosaicing method, a new interest-point based colour correction method, a new hybrid feature- and area-based correspondence constraint, and a new structured-light based 3D reconstruction method have been investigated. The major contributions and results can be summarised as follows:
    • A new interest-point based image stitching method has been proposed and investigated. The robustness of interest points has been tested and evaluated, and interest points have proved robust to changes in lighting, viewpoint, rotation, and scale.
    • A new interest-point based method for colour correction has been proposed and investigated. Linear and linear-plus-affine colour transforms proved more accurate than traditional diagonal transforms at matching colours in panoramic images.
    • A new structured-light based method for correspondence-point based 3D reconstruction has been proposed and investigated. The method has been shown to increase the accuracy of the correspondence search in areas with low texture, and correspondence speed has also been increased with a new hybrid feature- and area-based correspondence search constraint.
    • Based on the investigation, a software framework has been developed for image-based 3D virtual environment construction. The GUI supports importing images, colour correction, mosaicing, 3D surface reconstruction, texture recovery, and visualisation.
    • 11 research papers have been published.
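
    The colour-correction result can be illustrated with a small, self-contained sketch. The thesis code is not reproduced here; the fitting below is an assumed least-squares formulation of the diagonal (per-channel gain) and affine (3x3 matrix plus offset) models applied to colours sampled around matched interest points, with synthetic data standing in for real image samples.

```python
# Hedged illustration of diagonal vs. affine colour correction
# (assumed formulation, not the thesis implementation).
import numpy as np

def fit_diagonal(src, dst):
    """Traditional diagonal model: dst ~= src * g, one gain per channel."""
    g = (src * dst).sum(axis=0) / (src * src).sum(axis=0)
    return np.diag(g)

def fit_affine(src, dst):
    """Affine model: dst ~= src @ M + t, solved channel-wise by least squares."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # N x 4 design matrix
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)        # 4 x 3 solution
    return X[:3], X[3]                                 # M (3x3) and offset t (3,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0.0, 1.0, (500, 3))              # colours from image B
    true_M = np.array([[0.90, 0.05, 0.00],
                       [0.02, 1.10, 0.03],
                       [0.00, 0.04, 0.95]])
    dst = src @ true_M + np.array([0.02, -0.01, 0.03]) # matching colours in image A
    M, t = fit_affine(src, dst)
    D = fit_diagonal(src, dst)
    print("affine residual:  ", np.abs(src @ M + t - dst).mean())
    print("diagonal residual:", np.abs(src @ D - dst).mean())
```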

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest high-resolution, wide field-of-view VR headsets reached the market. While the great potential of such VR systems is common and accepted knowledge, issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers on how to optimize the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents knowledge that is believed to be beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of the existing literature and previous research, carried out to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies, based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the role of the characteristics of the environment shown to the viewer, and the role of the display used for the remote observation of the virtual environment are further investigated. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a significant improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed comparing static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
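
    The eye-adapted HDR idea lends itself to a brief sketch. The abstract does not specify the operator, so the following is only one plausible reading: a Reinhard-style global tone mapper whose adaptation luminance is the log-average of a window centred on the tracked gaze point, recomputed each frame so the mapping follows where the viewer is looking; the function name, window size, and key value are assumptions.

```python
# Assumed sketch of gaze-driven (eye-adapted) HDR tone mapping.
import numpy as np

def eye_adapted_tonemap(hdr_rgb, gaze_xy, window=64, key=0.18, eps=1e-6):
    """hdr_rgb: float HxWx3 linear radiance; gaze_xy: (x, y) gaze point in pixels."""
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    h, w = lum.shape
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    # Log-average luminance of the gaze-centred window drives the adaptation.
    y0, y1 = max(0, y - window), min(h, y + window)
    x0, x1 = max(0, x - window), min(w, x + window)
    l_adapt = np.exp(np.mean(np.log(lum[y0:y1, x0:x1] + eps)))
    # Reinhard-style global operator keyed to the gaze-local adaptation luminance.
    scaled = (key / l_adapt) * lum
    mapped = scaled / (1.0 + scaled)
    ldr = hdr_rgb * (mapped / (lum + eps))[..., None]
    return np.clip(ldr, 0.0, 1.0)
```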

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We present a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To address this finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. As expected, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices facilitating this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience. We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye level of the viewer; the streamer is able to place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred our prototype over simple video chat, even though it somewhat increased their sense of self-consciousness. Our participants indicated that they have their own preferences, even for simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.

    Social Robot Augmented Telepresence For Remote Assessment And Rehabilitation Of Patients With Upper Extremity Impairment

    With the shortage of rehabilitation clinicians in rural areas and elsewhere, remote rehabilitation (telerehab) fills an important gap in access to rehabilitation. We have developed a first-of-its-kind social robot augmented telepresence (SRAT) system, Flo, which consists of a humanoid robot mounted onto a mobile telepresence base, with the goal of improving the quality of telerehab. The humanoid has arms, a torso, and a face, allowing it to play games with and guide patients under the supervision of a remote clinician. To understand the usability of this system, we conducted a survey of hundreds of rehab clinicians. We found that therapists in the United States believe Flo would improve communication, patient motivation, and patient compliance compared to traditional telepresence for rehab. Therapists highlighted the importance of high-quality video for telerehab with their patients and were positive about the usefulness of the features that make up the Flo system. To compare telepresence interactions with versus without the social robot, we conducted controlled studies, the first to rigorously compare SRAT to classical telepresence (CT). We found that for many participants SRAT is more enjoyable than, and preferred over, CT. The results varied by age, motor function, and cognitive function, which is a novel finding. To understand how therapists and patients respond to and use SRAT in the wild over long-term use, we deployed Flo at an elder care facility. Therapists used Flo with their own patients however they deemed best; they developed new ways to use the system and highlighted the challenges they faced. To ease the load of performing assessments via telepresence, I constructed a pipeline to predict the motor function of patients from RGBD video of them performing activities via telepresence. The pipeline extracts poses from the video, calculates kinematic features and the reachable workspace, and predicts the level of impairment using a random forest. Finally, I have aggregated our findings across all these studies and provide a path forward to continue the evolution of SRAT.
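
    The assessment pipeline's final stage can be illustrated with a short sketch. The pose-extraction and feature code are not shown in the abstract, so the kinematic and reachable-workspace features below are random stand-ins and the feature layout is an assumption; only the random forest prediction step mirrors what the pipeline describes.

```python
# Illustrative sketch of the impairment-prediction stage (assumed data layout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# One row per recorded session; columns stand in for pre-computed features
# such as joint ranges of motion, peak hand speed, and reachable-workspace area.
X = rng.normal(size=(120, 6))
y = rng.integers(0, 3, size=120)   # impairment level label (e.g. low / mid / high)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy on the synthetic data: {scores.mean():.2f}")
```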

    Walking away from VR as ‘empathy-machine’: peripatetic animations with 360-photogrammetry

    My research partakes in an expanded documentary practice that weaves together walking, immersive technologies, and moving image. Two lines of enquiry motivate the research journey: the first responds to the trope of VR as 'empathy-machine' (Milk, 2015), often accompanied by the expression 'walking in someone else's shoes'. Within a research project that begins on foot, the idiom's significance demands investigation. The second line of enquiry pursues a collaborative artistic practice informed by dialogue and poetry, where the bipedals of walking and the binaries of the digital are entwined by phenomenology, hauntology, performance, and the in-betweens of animation. My practice-as-research methodology involves desk study and experimentation with VR, AR, digital photogrammetry, and CGI animation. Central to my approach is the multifaceted notion of Peripatos (as a school of philosophy, a stroll-like walk, and the path where the stroll takes place), manifested both corporeally and as 'playful curiosity'. The thread that interweaves practice and theory has my moving body at its centre; I call it the 'camera-walk': a processional shoot that documents a real place and the bodies that make it, while my hand holds high a camera-on-a-stick shooting 360-video. The resulting spherical video feeds into photogrammetric digital processing and reassembles into digital 3D models that form the starting ground for still images, a site-specific installation, augmented reality (AR) exchanges, and short films. Because 360-video includes the body that carries the camera, the digital meshes produced by the 'camera-walk' also reveal the documentarian during the act of documenting. Departing from the pursuit of perfect replicas, my research articulates the iconic lineage of photogrammetry, embracing imperfections as integral. Despite the planned obsolescence of my digital instruments, I treat my 360-camera as a 'dangerous tool', uncovering (and inventing) its hidden virtualities, via Vilém Flusser. Against its formative intentions as an accessory for extreme sports, I focus on everyday life and draw inspiration from Harun Farocki's 'another kind of empathy'. Within the collaborative projects presented in my thesis, I move away from the colonialist-inspired ideal of 'walking in someone else's shoes' and 'tread softly' in the footsteps of my co-walkers.

    Localisation and tracking of stationary users for extended reality

    In this thesis, we investigate the topics of localisation and tracking in the context of Extended Reality. In many on-site or outdoor Augmented Reality (AR) applications, users stand or sit in one place and perform mostly rotational movements, i.e. they are stationary. This type of stationary motion also occurs in Virtual Reality (VR) applications such as panorama capture, where a camera is moved in a circle. Both applications require us to track the motion of a camera in potentially very large and open environments. State-of-the-art methods such as Structure-from-Motion (SfM) and Simultaneous Localisation and Mapping (SLAM) tend to rely on scene reconstruction from significant translational motion in order to compute camera positions. This can often lead to failure in application scenarios such as tracking for seated sports spectators, or stereo panorama capture, where the translational movement is small compared to the scale of the environment. To begin with, we investigate the topic of localisation, as it is key to providing global context for many stationary applications. To achieve this, we capture our own datasets in a variety of large open spaces, including two sports stadia. We then develop and investigate localisation techniques in the context of these sports stadia using a variety of state-of-the-art approaches. We cover geometry-based methods, to handle the dynamic aspects of a stadium environment, as well as appearance-based methods, and compare them to a state-of-the-art SfM system to identify the most applicable methods for server-based and on-device localisation. Recent work in SfM has shown that the type of stationary motion we target can be reliably estimated by applying spherical constraints to the pose estimation. In this thesis, we extend these concepts into a real-time keyframe-based SLAM system for the purposes of AR, and develop a unique data structure for simplifying keyframe selection. We show, through both synthetic and real-data tests, that our constrained approach can track more robustly in these challenging stationary scenarios than state-of-the-art SLAM. In the application of capturing stereo panoramas for VR, this thesis demonstrates the unsuitability of standard SfM techniques for reconstructing these circular videos. We apply and extend recent research in spherically constrained SfM to create stereo panoramas and compare this with state-of-the-art general SfM in a technical evaluation. With a user study, we show that the motion requirements of our SfM approach are similar to the natural motion of users, and that a constrained SfM approach is sufficient for providing stereoscopic effects when viewing the panoramas in VR.
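
    The spherical constraint at the heart of the tracking work can be sketched compactly. The formulation below is an assumption for illustration, not the thesis implementation: the camera centre is forced to lie on a sphere of fixed, small radius around a pivot (the stationary user), so each frame's pose reduces to a rotation vector, and only that rotation is optimised against the reprojection error of known 3D points.

```python
# Assumed sketch of spherically constrained pose estimation for a stationary user.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_w, rotvec, pivot, radius, f=500.0, c=(320.0, 240.0)):
    R = Rotation.from_rotvec(rotvec).as_matrix()
    centre = pivot + R @ np.array([0.0, 0.0, radius])  # camera centre tied to the sphere
    p_cam = (points_w - centre) @ R                    # world -> camera (row vectors)
    return f * p_cam[:, :2] / p_cam[:, 2:3] + np.array(c)

def residuals(rotvec, points_w, observed_uv, pivot, radius):
    return (project(points_w, rotvec, pivot, radius) - observed_uv).ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(-5.0, 5.0, (50, 3)) + np.array([0.0, 0.0, 20.0])  # distant scene
    pivot, radius = np.zeros(3), 0.1                   # head pivot, small camera offset
    true_rv = np.array([0.0, 0.3, 0.05])               # ground-truth rotation
    obs = project(pts, true_rv, pivot, radius)
    fit = least_squares(residuals, np.zeros(3), args=(pts, obs, pivot, radius))
    print("recovered rotation vector:", np.round(fit.x, 3))
```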

    Spinoff 2008: 50 Years of NASA-Derived Technologies (1958-2008)

    NASA Technology Benefiting Society subject headings include: Health and Medicine, Transportation, Public Safety, Consumer, Home and Recreation, Environmental and Agricultural Resources, Computer Technology, and Industrial Productivity. Other topics covered include: Aeronautics and Space Activities, Education News, Partnership News, and the Innovative Partnership Program