40 research outputs found

    Visual Place Recognition From Eye Reflection

    The cornea of the human eye reflects incoming environmental light, which means information about the surrounding environment can be recovered from the corneal reflection in facial images. As the quality of consumer cameras has increased in recent years, this has raised privacy concerns: corneal reflections may reveal the people around the subject or where a photo was taken. This paper investigates the security risk posed by corneal reflection images, specifically visual place recognition from eye reflections. First, we constructed two datasets containing pairs of scene and corneal reflection images. The first dataset was captured in a virtual environment: we displayed pre-captured scene images on a 180-degree surround display system and recorded corneal reflections from subjects. The second dataset was captured in an outdoor environment. We developed and compared several visual place recognition algorithms, combining CNN-based image descriptors (a naive Siamese network and AFD-Net) with whole-image feature representations (VLAD and NetVLAD). We found that AFD-Net+VLAD performed best: the correct scene appeared among the top five candidates for 73.08% of queries. These results demonstrate that the location at which a facial picture was taken can be estimated, which leads simultaneously to a) positive applications, such as localising a robot while it converses with people, and b) negative scenarios, such as the security risk of uploading facial images to the public.
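    The top-five metric used above can be sketched as a descriptor-retrieval loop. This is a minimal illustration only, assuming global image descriptors (e.g. VLAD vectors) have already been extracted; the function names are ours, not the paper's.

```python
import numpy as np

def top_k_matches(query_desc, db_descs, k=5):
    """Return indices of the k database descriptors most similar
    to the query, ranked by cosine similarity."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity to every scene
    return np.argsort(-sims)[:k]        # best k, highest similarity first

def top_k_accuracy(queries, labels, db_descs, db_labels, k=5):
    """Fraction of queries whose true scene label appears among the
    top-k retrieved candidates (the top-five figure in the abstract)."""
    hits = 0
    for q, y in zip(queries, labels):
        idx = top_k_matches(q, db_descs, k)
        if y in db_labels[idx]:
            hits += 1
    return hits / len(queries)
```

    Any descriptor extractor can feed this loop; the retrieval and scoring stages stay unchanged when swapping, say, a Siamese embedding for NetVLAD.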

    SMART WEARABLES: ADVANCING MYOPIA RESEARCH THROUGH QUANTIFICATION OF THE VISUAL ENVIRONMENT

    Myopia development has been attributed to eyeball elongation, but its driving force is not fully understood. Previous research suggests a lack of time spent outdoors with exposure to high light levels, or time spent on near-work, as potential environmental risk factors. Although light levels are quantifiable with wearables, near-work data collection relies solely on questionnaires, with a corresponding risk of subjective bias. Studies spanning decades have established that eye growth is optically guided. This proposal received further support from recent findings of larger changes in the thickness of the eye's choroidal layer after short-term optical interventions, compared with the daily eye-length changes attributed to myopia. Most of these studies used a monocular optical appliance to manipulate potential myopiagenic factors, which may introduce confounders by disrupting the natural functionality of the visual system. This thesis reports on improvements in systems for characterising the visual dioptric space and their application to myopia studies. Understanding the driving forces of myopia will help prevent related vision loss. Study I: An eye-tracker incorporating time-of-flight (ToF) technology was developed and validated to obtain spatial information about the wearer's field of view. By matching gaze data with point-cloud data, the distance to the point of regard (DtPoR) is determined. Result: DtPoR can be measured continuously with clinically relevant accuracy to estimate near-work objectively. Study II: Near-work was measured with diary entries and compared with DtPoR estimations, and the diversity of the dioptric landscape presented to the retina during near-work was assessed. Results: Objective and subjective measures of near-work did not correlate strongly. The ecologically valid dioptric landscape during near-work changes by up to -1.5 D towards the periphery of a 50° visual field.
Study III: Choroid thickness changes were evaluated after exposure (approximately 30 min) to a controlled, dioptrically diverse landscape, using a global, sensitivity-enhanced model. Result: No choroid thickness changes were found within the measuring field of approximately 45°. Discussion: The developed device could support future research to resolve the disagreement between objective and subjective measures of near-work and contribute to a better understanding of the ecologically valid dioptric landscape. The proposed choroid-layer-thickness model might support short-term myopia-control research.
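    The gaze-to-point-cloud matching in Study I can be sketched as follows. This is a minimal illustration of the idea, not the thesis's implementation: the ToF point lying closest to the gaze ray gives the distance to the point of regard, and accommodative (dioptric) demand follows as the reciprocal of that distance in metres.

```python
import numpy as np

def distance_to_point_of_regard(gaze_origin, gaze_dir, point_cloud):
    """Estimate DtPoR: project ToF points onto the gaze ray and take the
    distance along the ray to the point with the smallest perpendicular
    offset, considering only points in front of the eye."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    rel = point_cloud - gaze_origin
    along = rel @ d                                   # metres along the ray
    perp = np.linalg.norm(rel - np.outer(along, d), axis=1)
    perp[along <= 0] = np.inf                         # ignore points behind the eye
    return along[np.argmin(perp)]

def dioptric_demand(distance_m):
    """Accommodative demand in diopters: D = 1 / distance (metres)."""
    return 1.0 / distance_m
```

    A real pipeline would additionally filter sensor noise and aggregate DtPoR over time to classify near-work episodes; only the geometric core is shown here.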

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded within full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and their associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and colour channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies in which scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude.
We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction: 1 Fluorescence Microscopy, 2 Introduction to Visual Processing, 3 A Short Introduction to Cross Reality, 4 Eye Tracking and Gaze-based Interaction; Part II - VR and AR for Systems Biology: 5 scenery — VR/AR for Systems Biology, 6 Rendering, 7 Input Handling and Integration of External Hardware, 8 Distributed Rendering, 9 Miscellaneous Subsystems, 10 Future Development Directions; Part III - Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking, 12 Towards Interactive Virtual Reality Laser Ablation, 13 Rendering the Adaptive Particle Representation, 14 sciview — Integrating scenery into ImageJ2 & Fiji; Part IV - Conclusion: 15 Conclusions and Outlook; Backmatter & Appendices: A Questionnaire for VR Ablation User Study, B Full Correlations in VR Ablation Questionnaire, C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung.

    Ubiquitous computing and natural interfaces for environmental information

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Mestre in Engenharia do Ambiente, profile Gestão e Sistemas Ambientais (Environmental Management and Systems). The objective of the next computing revolution is to embed every street, building, room, and object with computational power. Ubiquitous computing (ubicomp) will allow every object to receive and transmit information, sense its surroundings and act accordingly, be located from anywhere in the world, and connect every person. Everyone will be able to access information, regardless of their age, computer knowledge, literacy, or physical impairment. It will impact the world in a profound way, empowering mankind and improving the environment, but it will also create new challenges that our society, economy, health, and global environment will have to overcome. Negative impacts have to be identified and dealt with in advance. Despite these concerns, environmental studies have been mostly absent from discussions on the new paradigm. This thesis examines ubiquitous computing and its technological emergence, raises awareness of future impacts, and explores the design of new interfaces and rich interaction modes. Environmental information is approached as an area that may greatly benefit from ubicomp as a way to gather, treat, and disseminate it, simultaneously complying with the Aarhus Convention. In an educational context, new media are poised to revolutionise the way we perceive, learn, and interact with environmental information. cUbiq is presented as a natural interface for accessing that information.

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available