2,184 research outputs found

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to plan the intervention intraoperatively, directly on the patient’s bone. The glasses and all other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach.

    Probabilistic Approach to Robust Wearable Gaze Tracking

    This paper presents a method for computing the gaze point from camera data captured with a wearable gaze tracking device. The method combines a physical model of the human eye, advanced Bayesian computer vision algorithms, and Kalman filtering, resulting in high accuracy and low noise. Our C++ implementation processes camera streams at 30 frames per second in real time. The performance of the system is validated in an exhaustive experimental setup with 19 participants using a self-made device. Owing to the eye model and binocular cameras, the system is accurate at all distances and invariant to device movement. We also test our system against a best-in-class commercial device, which it outperforms in spatial accuracy and precision. The software and hardware instructions, as well as the experimental data, are published as open source.
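    As a rough illustration of the filtering ingredient mentioned above (this is not the authors' published code), the sketch below smooths a stream of noisy 2-D gaze points with a constant-velocity Kalman filter; the state layout and noise levels are assumptions chosen purely for demonstration.

```python
# Hypothetical sketch: smoothing 2-D gaze points with a constant-velocity
# Kalman filter (not the paper's implementation; parameters are assumed).
import numpy as np

dt = 1.0 / 30.0                                  # 30 fps camera stream
F = np.array([[1, 0, dt, 0],                     # state transition: x, y, vx, vy
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                      # only the gaze point is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-4                             # process noise (assumed)
R = np.eye(2) * 1e-2                             # measurement noise (assumed)

x = np.zeros(4)                                  # state estimate
P = np.eye(4)                                    # state covariance

def kalman_step(z):
    """Fuse one noisy gaze measurement z = (px, py); return the smoothed point."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z, dtype=float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x[:2]
```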

    PMAS: The Potsdam Multi Aperture Spectrophotometer. II. The Wide Integral Field Unit PPak

    PPak is a new fiber-based Integral Field Unit (IFU), developed at the Astrophysical Institute Potsdam and implemented as a module in the existing PMAS spectrograph. The purpose of PPak is to provide both an extended field of view with large light-collecting power for each spatial element and an adequate spectral resolution. The PPak system consists of a fiber bundle with 331 object, 36 sky, and 15 calibration fibers. The object and sky fibers collect the light from the focal plane behind a focal-reducer lens. The object fibers of PPak, each 2.7 arcseconds in diameter, provide a contiguous hexagonal field of view of 74 × 64 arcseconds on the sky, with a filling factor of 60%. The operational wavelength range is 400 to 900 nm. The PPak IFU, together with the PMAS spectrograph, is intended for the study of extended, low-surface-brightness objects, offering an optimization of total light-collecting power and spectral resolution. This paper describes the instrument design, the assembly, integration and tests, and the commissioning and operational procedures, and presents the measured performance at the telescope.

    Infrared tracking system for immersive virtual environments

    In this paper, we describe the theoretical foundations and engineering approach of an infrared-optical tracking system specially designed for large-scale immersive virtual environment (VE) or augmented reality (AR) settings. The system is capable of tracking independent retro-reflective markers arranged in a 3D structure (artefact) in real time (25 Hz), recovering all 6 degrees of freedom (DOF). These artefacts can be fitted to the user’s stereo glasses to track his/her pose while immersed in the VE or AR, or can be used as a 3D input device. The hardware configuration consists of four shutter-synchronized cameras fitted with band-pass infrared filters; the artefacts are illuminated by infrared array emitters. The system was specially designed to fit a room of 5.7 m × 2.7 m × 3.4 m, matching the dimensions of the CAVE-Hollowspace of Lousal where the system will be deployed. Pilot lab results have shown a latency of 40 ms when tracking the pose of two artefacts with four infrared markers, achieving a frame rate of 24.80 fps with a mean accuracy of 0.93 mm / 0.52° and a mean precision of 0.08 mm / 0.04° in the overall translation/rotation DOFs, fulfilling the system requirements initially defined.
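    For context (this is not the system's implementation), recovering a 6-DOF pose from the triangulated 3-D positions of an artefact's retro-reflective markers can be sketched with the classical Kabsch/SVD fit below; perfect marker correspondences are an assumption made for illustration.

```python
# Illustrative only: rigid 6-DOF pose of a marker artefact via the Kabsch/SVD
# algorithm, fitting observed marker positions to their known model positions.
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Return rotation R and translation t mapping model marker positions
    (Nx3, artefact frame) onto their triangulated observations (Nx3, room frame)."""
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    C = (model_pts - cm).T @ (observed_pts - co)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```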

    J Fluorescence

    The scope of this paper is to illustrate the need for improved quality assurance in fluorometry. For this purpose, instrumental sources of error and their influence on the reliability and comparability of fluorescence data are highlighted for frequently used photoluminescence techniques, ranging from conventional macro- and microfluorometry through fluorescence microscopy and flow cytometry to microarray technology and in vivo fluorescence imaging. In particular, we discuss the need for, and requirements on, fluorescence standards for characterizing and validating the performance of fluorescence instruments, for enhancing the comparability of fluorescence data, and for enabling quantitative fluorescence analysis. Special emphasis is placed on spectral fluorescence standards and fluorescence intensity standards.

    Baseline and triangulation geometry in a standard plenoptic camera

    In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. Advances in micro lenses and image sensors have enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied to plenoptic cameras. We present a geometrical light field model that allows triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with optical design software further validate the model’s accuracy, with deviations of less than 0.33% for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
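    The stereo relation that the abstract builds on can be written down directly; the snippet below is a textbook-level sketch (not the paper's light field model), and the numbers in the example are invented for illustration.

```python
# Classical stereo triangulation: baseline B, focal length f, and disparity d
# determine depth Z. The paper extends this relation to viewpoint pairs inside
# a plenoptic camera; the values below are made up.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Z = B * f / d for a rectified viewpoint pair."""
    return baseline_m * focal_px / disparity_px

# Example: a 1 mm virtual baseline between sub-aperture views, a 2000 px focal
# length, and a 4 px disparity place the object roughly 0.5 m from the camera.
print(depth_from_disparity(0.001, 2000.0, 4.0))   # -> 0.5
```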

    Immunochromatographic diagnostic test analysis using Google Glass.

    We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostic tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are imaged simultaneously with the built-in camera of the Google Glass through a hands-free, voice-controlled interface and digitally transmitted to a server for processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results, accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) tests and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing code, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.
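    As a hypothetical sketch of the kind of quantification such a server might perform (the abstract does not describe the actual algorithm), one can integrate the darkening at the test line of a grayscale strip crop relative to its local background; the function name, window sizes, and the assumption of a known line position are all illustrative.

```python
# Hypothetical quantification step for a lateral-flow strip (not the authors'
# code): compare test-line intensity against nearby background intensity.
import numpy as np

def test_line_signal(strip, line_row, band=5, bg_margin=20):
    """strip: 2-D grayscale array with rows along the flow direction.
    Returns a background-corrected signal at the assumed test-line row."""
    profile = strip.mean(axis=1)                       # intensity along the strip
    line = profile[line_row - band:line_row + band].mean()
    bg = np.concatenate([
        profile[line_row - bg_margin - band:line_row - bg_margin],
        profile[line_row + bg_margin:line_row + bg_margin + band],
    ]).mean()
    return max(bg - line, 0.0)                         # darker line => larger signal
```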

    Hybrid Video/Optical See-Through HMD

    An old but still ongoing debate among augmented reality (AR) experts concerns which see-through paradigm is best in wearable AR displays. The video see-through (VST) and optical see-through (OST) paradigms each have their own strengths and shortcomings with respect to technological and human-factor aspects. The major difference between these see-through paradigms lies in providing an aided (VST) or unaided (OST) view of the real world. In this work, we present a novel approach for the development of AR stereoscopic head-mounted displays (HMDs) that can provide both see-through mechanisms. Our idea is to dynamically modify the transparency of the display through a liquid crystal (LC)-based electro-optical shutter applied on top of a standard OST device, suitably modified to house a pair of external cameras. A plane-induced homography transformation is used to consistently warp the video images, reducing the parallax between cameras and displays. An externally applied drive voltage smoothly controls the light transmittance of the LC shutters, allowing an easy transition between the unaided and the camera-mediated view of the real scene. Our tests have proven the efficacy of the proposed solution under worst-case lighting conditions.
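    A minimal sketch of the warping step, assuming OpenCV and a plane-induced homography calibrated offline for a reference plane (the matrix and image sizes below are placeholders, not the authors' implementation):

```python
# Warp a captured camera frame with a plane-induced homography so it lines up
# with the display viewpoint, reducing camera-to-display parallax in VST mode.
import cv2
import numpy as np

# Placeholder homography; in practice H is calibrated for a reference plane.
H = np.array([[1.02, 0.01, -12.0],
              [0.00, 1.03,  -8.0],
              [0.00, 0.00,   1.0]], dtype=np.float64)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)       # stand-in for a camera frame
display_size = (1280, 720)                              # display resolution (assumed)
warped = cv2.warpPerspective(frame, H, display_size)    # frame as seen from the display
cv2.imwrite("warped_for_display.png", warped)
```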