5 research outputs found

    A depth camera motion analysis framework for tele-rehabilitation: motion capture and person-centric kinematics analysis

    With increasing importance given to telerehabilitation, there is a growing need for accurate, low-cost, and portable motion capture systems that do not require specialist assessment venues. This paper proposes a novel framework for motion capture using only a single depth camera, which is portable and cost-effective compared to most industry-standard optical systems, without compromising on accuracy. Novel signal processing and computer vision algorithms are proposed to determine motion patterns of interest from infrared and depth data. To demonstrate the proposed framework’s suitability for rehabilitation, we developed a gait analysis application that depends on the underlying motion capture sub-system. Each subject’s individual kinematic parameters are calculated and stored for monitoring progress over the course of clinical therapy. Experiments were conducted on 14 subjects: 5 healthy and 9 stroke survivors. The results show very close agreement of the relevant joint angles with those from a 12-camera VICON system, a mean error of at most 1.75% in detecting gait events with respect to manually generated ground truth, and significant improvements in accuracy and execution time compared to a previous Kinect-based system.
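    As an illustration of the kind of person-centric kinematics such a framework computes, the sketch below derives a knee flexion angle from three tracked 3D joint positions. The joint names, coordinates, and helper function are hypothetical examples for illustration, not the paper's actual algorithm or API.

        import numpy as np

        def joint_angle(a, b, c):
            # Angle at joint b (in degrees) between segments b->a and b->c.
            v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
            cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

        # Hypothetical hip, knee, and ankle positions in metres (camera frame).
        hip, knee, ankle = (0.00, 0.90, 2.50), (0.05, 0.50, 2.40), (0.10, 0.10, 2.50)
        print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")

    Tracking such an angle over a gait cycle, per subject, is one way kinematic parameters of the sort described above could be monitored across therapy sessions.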

    Real-time video-plus-depth content creation utilizing time-of-flight sensor - from capture to display

    Recent developments in 3D camera technologies, display technologies, and other related fields have aimed to provide a 3D experience for home users and to establish services such as Three-Dimensional Television (3DTV) and Free-Viewpoint Television (FTV). Emerging multiview autostereoscopic displays do not require any eyewear and can be watched by multiple users at the same time, and are thus very attractive for home use. To provide a natural 3D impression, autostereoscopic 3D displays are designed to synthesize multi-perspective virtual views of a scene using Depth-Image-Based Rendering (DIBR) techniques. One key issue of DIBR is that scene depth information, in the form of a depth map, is required in order to synthesize virtual views. Acquiring this information is a complex and challenging task and remains an active research topic. In this thesis, the problem of dynamic 3D video content creation from real-world visual scenes is addressed. The work assumes a data acquisition setup comprising a Time-of-Flight (ToF) depth sensor and a single conventional video camera. The main objective of the work is to develop efficient algorithms for the stages of synchronous data acquisition, color and ToF data fusion, and final view-plus-depth frame formatting and rendering. The outcome of this thesis is a prototype 3DTV system capable of rendering live 3D video on an autostereoscopic 3D display. The presented system makes extensive use of the processing capabilities of modern Graphics Processing Units (GPUs) to achieve real-time processing rates while providing acceptable visual quality. Furthermore, the issue of arbitrary view synthesis is investigated in the context of DIBR, and a novel approach based on depth layering is proposed. The proposed approach is applicable to general virtual view synthesis, i.e. to virtual cameras with different positions, orientations, and focal lengths, as well as varying sensor spatial resolutions. The experimental results demonstrate the real-time capability of the proposed method even for CPU-based implementations. It compares favorably to other view synthesis methods in terms of visual quality, while being more computationally efficient.
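    To make the DIBR step concrete, here is a minimal sketch of forward-warping a colour image into a horizontally shifted virtual view using a per-pixel disparity derived from the depth map. The camera parameters and the naive nearest-pixel splatting are assumptions for illustration; a production pipeline such as the one described in the thesis also needs occlusion-aware splatting and hole filling.

        import numpy as np

        def warp_view(color, depth, focal_px, baseline_m):
            # color: HxWx3 image, depth: HxW in metres.
            # Pinhole geometry gives a horizontal disparity d = f * b / Z.
            h, w = depth.shape
            disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)
            xs = np.tile(np.arange(w), (h, 1))
            ys = np.tile(np.arange(h)[:, None], (1, w))
            x_dst = np.round(xs - disparity).astype(int)
            valid = (x_dst >= 0) & (x_dst < w)
            out = np.zeros_like(color)
            out[ys[valid], x_dst[valid]] = color[valid]  # naive splat: last write wins
            return out

    Pixels that no source pixel maps to remain black; these disocclusion holes are exactly what inpainting or depth-layering approaches, like the one proposed in the thesis, are designed to handle.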

    Robust 3D Surface Reconstruction from Light Fields

    Light field data captures the intensity, as well as the direction, of rays in 3D space, making it possible to retrieve not only the 3D geometry of the acquired scene but also its reflectance properties. The main focus of this thesis is precise 3D geometry reconstruction from light fields, especially for scenes with specular objects. A new semi-global approach for 3D reconstruction from linear light fields is proposed. This method combines a modified version of the Progressive Probabilistic Hough Transform with local slope estimates to extract orientations, and consequently depth information, in epipolar plane images (EPIs). The resulting reconstructions achieve a higher accuracy than local methods, with more precise localization of object boundaries and preservation of fine details. In the second part of the thesis, the proposed approach is extended to circular light fields in order to determine the full 360° view of target objects. Additionally, circular light fields allow depth to be retrieved even from datasets acquired with telecentric lenses, a task that is not possible with a linearly moving camera. Experimental results on synthetic and real datasets demonstrate the quality and robustness of the proposed algorithm, which provides precise reconstructions even for highly specular objects. The quality of the final reconstruction opens up many possible application scenarios, such as precise 3D reconstruction for defect detection in industrial optical inspection, object scanning for heritage preservation, and depth segmentation for the movie industry.
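    The geometric relation behind EPI-based methods is that a scene point at depth Z traces a line in the epipolar plane image whose slope equals its disparity per camera step. The sketch below converts an estimated slope into depth; the numbers are illustrative assumptions, and the thesis itself extracts the slopes with a modified Progressive Probabilistic Hough Transform combined with local estimates rather than taking them as given.

        import numpy as np

        def depth_from_epi_slope(slope_px_per_view, focal_px, cam_step_m):
            # For a linearly translating camera, disparity d satisfies
            # d = f * delta / Z, hence Z = f * delta / d.
            d = np.abs(np.asarray(slope_px_per_view, dtype=float))
            return focal_px * cam_step_m / np.maximum(d, 1e-9)

        # Example: an EPI line shifting 2.5 px between neighbouring views,
        # with a 1200 px focal length and a 5 mm camera step -> 2.4 m.
        print(depth_from_epi_slope(2.5, 1200.0, 0.005))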

    Variational image fusion

    The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach these fusion tasks within a variational framework. First, we present iterative schemes that are well suited to such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and parallelise well. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: first, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes; second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
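    As a concrete illustration of the complete rank transform mentioned above, the sketch below computes, for every pixel, the strict ranks of all intensities in its neighbourhood; these rank vectors are invariant under any monotonically increasing brightness change, which is what makes the resulting optic flow robust to illumination variation. The window size and the dense NumPy formulation are assumptions for illustration, not the thesis implementation.

        import numpy as np

        def complete_rank_transform(img, radius=1):
            # img: HxW greyscale array. Returns an HxWxK descriptor, where
            # K = (2*radius+1)**2 and each entry is the strict rank of one
            # neighbourhood intensity among all K intensities in the window.
            h, w = img.shape
            pad = np.pad(img, radius, mode='edge')
            patches = np.stack([pad[dy:dy + h, dx:dx + w]
                                for dy in range(2 * radius + 1)
                                for dx in range(2 * radius + 1)], axis=-1)
            # Rank = number of neighbourhood values strictly smaller.
            ranks = (patches[..., None, :] < patches[..., :, None]).sum(axis=-1)
            return ranks.astype(np.uint8)  # fine for radius <= 7 (K <= 225)

    Matching these descriptors instead of raw intensities in the optic flow data term leaves the estimates unchanged when, for example, exposure time varies between frames.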
