20 research outputs found

    A decision-level fusion strategy for multimodal ocular biometric in visible spectrum based on posterior probability

    © 2017 IEEE. In this work, we propose a posterior-probability-based decision-level fusion strategy for multimodal ocular biometrics in the visible spectrum, employing the iris, sclera and peri-ocular traits. To the best of our knowledge, this is the first attempt to design a multimodal ocular biometric system using all three ocular traits. Employing these traits in combination can increase the reliability and universality of the system: in some scenarios the sclera and iris may be highly occluded, or the eyes completely closed, in which case the peri-ocular trait can be relied on for the decision. The proposed system is constituted of the three independent traits and their combinations. The classification output of the trait that produces the highest posterior probability is taken as the final decision. Experiments conducted with the proposed scheme demonstrate appreciable reliability and universal applicability of the ocular traits.
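The decision rule described above reduces to an argmax over per-trait posteriors. A minimal sketch, assuming each trait classifier reports a predicted identity together with its posterior probability (all names and scores below are illustrative, not from the paper):

```python
# Minimal sketch of the posterior-probability decision-level fusion rule:
# the trait (or trait combination) with the highest posterior wins.
# Trait names, identities and scores are illustrative assumptions.

def fuse_by_max_posterior(posteriors):
    """Given {trait: (predicted_identity, posterior)}, return the decision
    of the trait whose classifier reports the highest posterior."""
    best_trait = max(posteriors, key=lambda t: posteriors[t][1])
    identity, prob = posteriors[best_trait]
    return best_trait, identity, prob

# Example: sclera and iris heavily occluded (low posteriors), so the
# peri-ocular classifier carries the decision.
scores = {
    "iris": ("subject_12", 0.41),
    "sclera": ("subject_07", 0.38),
    "periocular": ("subject_12", 0.93),
}
print(fuse_by_max_posterior(scores))  # ('periocular', 'subject_12', 0.93)
```

The fallback behaviour the abstract motivates (closed eyes, occluded iris/sclera) emerges naturally: whichever trait remains visible tends to produce the most confident classifier, so its output is selected.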

    SMART WEARABLES: ADVANCING MYOPIA RESEARCH THROUGH QUANTIFICATION OF THE VISUAL ENVIRONMENT

    Myopia development has been attributed to eyeball elongation, but its driving force is not fully understood. Previous research suggests a lack of time spent outdoors with exposure to high light levels, or time spent on near-work, as potential environmental risk factors. Although light levels are quantifiable with wearables, near-work data collection relies solely on questionnaires, leaving a risk of subjective bias. Studies spanning decades have identified that eye growth is optically guided. This proposal received further support from recent findings of larger changes in the thickness of the eye's choroidal layer after short-term optical interventions compared with daily eye-length changes attributed to myopia. Most of these studies used a monocular optical appliance to manipulate potential myogenic factors, which may introduce confounders by disrupting the natural functionality of the visual system. This thesis reports on improvements in systems for characterising the visual dioptric space and their application to myopia studies. Understanding the driving forces of myopia will help prevent related vision loss.
    Study I: An eye-tracker was developed and validated that incorporated time-of-flight (ToF) technology to obtain spatial information about the wearer's field of view. By matching gaze data with point-cloud data, the distance to the point of regard (DtPoR) is determined. Result: DtPoR can be measured continuously with clinically relevant accuracy to estimate near-work objectively.
    Study II: Near-work was measured with diary entries and compared with DtPoR estimations, and the diversity of the dioptric landscape presented to the retina was assessed during near-work. Results: Objective and subjective measures of near-work were not found to correlate highly. The ecologically valid dioptric landscape during near-work decreases by up to 1.5 D towards the periphery of a 50˚ visual field.
    Study III: Choroid thickness changes were evaluated after exposure (approximately 30 min) to a controlled, dioptrically diverse landscape, using a global, sensitivity-enhanced model. Result: No choroid thickness changes were found within the measuring field of approximately 45˚.
    Discussion: The developed device could support future research to resolve the disagreement between objective and subjective data on near-work and contribute to a better understanding of the ecologically valid dioptric landscape. The proposed choroid-layer thickness model might support short-term myopia-control research.
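Study I's core computation, matching a gaze ray against ToF point-cloud data to obtain the distance to the point of regard, can be sketched as a nearest-point-to-ray search. This is an illustrative reconstruction under stated assumptions (eye position, unit gaze direction and point cloud in a common metric frame), not the thesis implementation:

```python
import numpy as np

# Illustrative sketch (not the thesis implementation): estimate the distance
# to the point of regard (DtPoR) by finding the point-cloud point closest to
# the gaze ray, then taking its distance along the ray from the eye.

def dtpor(eye_pos, gaze_dir, point_cloud):
    """eye_pos: (3,), gaze_dir: (3,), point_cloud: (N, 3), all in metres."""
    d = gaze_dir / np.linalg.norm(gaze_dir)   # unit gaze direction
    rel = point_cloud - eye_pos               # vectors from eye to each point
    t = rel @ d                               # projection along the gaze ray
    perp = rel - np.outer(t, d)               # perpendicular offset from ray
    valid = t > 0                             # keep points in front of the eye
    idx = np.argmin(np.where(valid, np.linalg.norm(perp, axis=1), np.inf))
    return t[idx]                             # distance to point of regard (m)

# A point 0.5 m straight ahead should yield DtPoR = 0.5 m
# (a near-work demand of 2.0 D).
cloud = np.array([[0.0, 0.0, 0.5], [0.3, 0.3, 1.0], [0.0, 0.0, -1.0]])
print(dtpor(np.zeros(3), np.array([0.0, 0.0, 1.0]), cloud))  # 0.5
```

In practice, such an estimate would need temporal filtering and an angular tolerance around the gaze ray, since both the eye-tracker's gaze vector and the ToF point cloud are noisy.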

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    Recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques that can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of research and development in these areas. Presenting recent advances in optical methods and novel techniques, as well as their applications in biomedicine and healthcare, it will be a valuable source of information for anyone interested in the subject.

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software, designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze-direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, and improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users' ethnic preferences towards a robot face.

    Faces and hands: modeling and animating anatomical and photorealistic models with regard to the communicative competence of virtual humans

    In order to be believable, virtual human characters must be able to communicate realistically, in a human-like fashion. This dissertation contributes to improving and automating several aspects of virtual conversations. We have proposed techniques to add non-verbal, speech-related facial expressions to audiovisual speech, such as head nods for emphasis. During conversation, humans experience shades of emotion much more frequently than the strong Ekmanian basic emotions. This prompted us to develop a method that interpolates between facial expressions of emotions to create new ones based on an emotion model. In the area of facial modeling, we have presented a system to generate plausible 3D face models from vague mental images. It makes use of a morphable model of faces and exploits correlations among facial features. The hands also play a major role in human communication. Since the basis for every realistic animation of gestures must be a convincing model of the hand, we devised a physics-based anatomical hand model, in which a hybrid muscle model drives the animations. The model was used to visualize complex hand movement captured using multi-exposure photography.