18 research outputs found

    Three-dimensional point-cloud room model for room acoustics simulations


    3D hand posture recognition using multicam

    This paper presents hand posture recognition in 3D using the MultiCam, a monocular 2D/3D camera developed by the Center for Sensor Systems (ZESS). The MultiCam is capable of providing high-resolution color data acquired from CMOS sensors and low-resolution distance (or range) data computed with time-of-flight (ToF) technology using Photonic Mixer Device (PMD) sensors. The availability of the distance data allows the hand posture to be recognized along the z-axis without complex computational algorithms, which enables real-time processing and effective background elimination. The hand posture recognition employs a simple but robust algorithm that counts the number of fingers detected around a virtually created circle centered at the Center of Mass (CoM) of the hand and thereby assigns the class associated with a particular hand posture. At the end of this paper, the technique that uses the intersection between the circle and the fingers to classify the hand posture, exploiting the MultiCam's capabilities, is proposed. This technique solves the problems of orientation, size and distance invariance by utilizing the distance data.
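
    As a minimal illustration of the circle-intersection idea described above (a sketch, not the paper's implementation): assuming the ToF distance data has already been used to segment a binary hand mask, the finger count follows from sampling a virtual circle around the Center of Mass. The radius heuristic and function names below are illustrative assumptions.

    import numpy as np

    def count_fingers(hand_mask: np.ndarray, radius_factor: float = 0.7, samples: int = 360) -> int:
        # hand_mask: binary image of the segmented hand (non-zero = hand/finger pixels).
        ys, xs = np.nonzero(hand_mask)
        if len(xs) == 0:
            return 0
        # Center of Mass (CoM) of the hand region.
        cx, cy = xs.mean(), ys.mean()
        # Virtual circle radius proportional to the hand's spatial extent (assumed heuristic).
        radius = radius_factor * 2.0 * max(xs.std(), ys.std())
        # Sample the circle and mark which samples land on the hand.
        angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        px = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, hand_mask.shape[1] - 1)
        py = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, hand_mask.shape[0] - 1)
        on_hand = hand_mask[py, px] > 0
        # Each contiguous run of "on" samples is one intersection between circle and hand.
        runs = np.count_nonzero(on_hand & ~np.roll(on_hand, 1))
        # One run typically belongs to the wrist/forearm; the remaining runs are fingers.
        return max(runs - 1, 0)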

    Determination of the dynamic vehicle model parameters by means of computer vision

    This study is devoted to determining the geometric, kinematic and dynamic characteristics of a vehicle. For this purpose, a combined approach is proposed, applying models from deformable-body mechanics to describe the oscillatory movements of a vehicle and computer vision algorithms to process a series of object images and determine the state parameters of the vehicle on the road. The model of the vehicle's vertical oscillations is built from viscoelastic elements and a dry-friction element, which adequately represent the behavior of the sprung masses. The introduced algorithms and models can be used as part of a complex system for monitoring and controlling road traffic. In addition, they can determine the speed of the car, its dynamic parameters, and the driving behavior of individual drivers.
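
    A minimal sketch of the kind of sprung-mass model summarized above: a single vertical degree of freedom combining a viscoelastic element (spring plus viscous damper) with a Coulomb dry-friction element. All parameter values and names are illustrative assumptions, not the study's identified values.

    import numpy as np

    def simulate_sprung_mass(m=400.0, k=2.0e4, c=1.5e3, f_dry=150.0,
                             z0=0.05, dt=1e-3, t_end=5.0):
        # Free vertical oscillation of the sprung mass after an initial displacement z0 [m].
        n = int(t_end / dt)
        z, v = z0, 0.0
        history = np.empty((n, 2))
        for i in range(n):
            f_spring = -k * z                 # elastic restoring force
            f_damper = -c * v                 # viscous (rate-dependent) force
            f_friction = -f_dry * np.sign(v)  # dry friction opposing the motion
            a = (f_spring + f_damper + f_friction) / m
            v += a * dt                       # semi-implicit Euler integration
            z += v * dt
            history[i] = (i * dt, z)
        return history                        # columns: time [s], displacement [m]

    In a vision-based identification setting, such a simulated displacement would be fitted to the vertical body motion tracked across the image series to estimate the stiffness, damping and dry-friction levels.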

    Virtual View Generation with a Hybrid Camera Array

    Virtual view synthesis from an array of cameras has been an essential element of three-dimensional video broadcasting/conferencing. In this paper, we propose a scheme based on a hybrid camera array consisting of four regular video cameras and one time-of-flight depth camera. During rendering, we use the depth image from the depth camera as initialization and compute a view-dependent scene geometry using constrained plane sweeping from the regular cameras. View-dependent texture mapping is then deployed to render the scene at the desired virtual viewpoint. Experimental results show that the addition of the time-of-flight depth camera greatly improves the rendering quality compared with an array of regular cameras of similar sparsity. For 3D video broadcasting/conferencing, our hybrid camera system demonstrates great potential for reducing the amount of data to compress/stream while maintaining high rendering quality.
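
    A minimal sketch of a constrained plane sweep of the kind described above, under several simplifying assumptions: the ToF depth prior is already warped into the virtual view, world coordinates coincide with the virtual camera frame, and pinhole calibration (K, R, t) is available for each regular camera; the search band and sampling are illustrative.

    import numpy as np

    def project(K, R, t, points):
        # Pinhole projection of Nx3 points (virtual-camera frame) to Nx2 pixel coordinates.
        cam = points @ R.T + t
        uv = cam[:, :2] / cam[:, 2:3]
        return uv @ K[:2, :2].T + K[:2, 2]

    def constrained_sweep(depth_prior, K_virt, cams, band=0.1, steps=8):
        # depth_prior: HxW ToF depth [m] in the virtual view; cams: list of (image, K, R, t).
        h, w = depth_prior.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        rays = np.stack([(us - K_virt[0, 2]) / K_virt[0, 0],
                         (vs - K_virt[1, 2]) / K_virt[1, 1],
                         np.ones((h, w))], axis=-1).reshape(-1, 3)
        best_cost = np.full(h * w, np.inf)
        best_depth = depth_prior.reshape(-1).copy()
        for s in np.linspace(1.0 - band, 1.0 + band, steps):
            depth = depth_prior.reshape(-1) * s        # hypothesis constrained near the ToF prior
            points = rays * depth[:, None]
            colors = []
            for img, K, R, t in cams:
                uv = np.round(project(K, R, t, points)).astype(int)
                ok = (uv[:, 0] >= 0) & (uv[:, 0] < img.shape[1]) & \
                     (uv[:, 1] >= 0) & (uv[:, 1] < img.shape[0])
                c = np.zeros((len(uv), 3))
                c[ok] = img[uv[ok, 1], uv[ok, 0]]
                colors.append(c)
            cost = np.stack(colors).var(axis=0).sum(axis=-1)  # photo-consistency (color variance)
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_depth[better] = depth[better]
        return best_depth.reshape(h, w)                # per-pixel depth for view-dependent texturing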

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras, their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.
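
    For reference, the continuous-wave ToF principle behind these cameras relates depth to the measured phase shift of modulated light; the short sketch below uses the standard textbook relations (with an assumed 20 MHz modulation frequency) and also shows where the limited ambiguity-free range mentioned above comes from.

    import math

    C = 299_792_458.0  # speed of light [m/s]

    def tof_depth(phase_rad: float, f_mod_hz: float = 20e6) -> float:
        # Depth from the phase shift between emitted and received modulated light.
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    def unambiguous_range(f_mod_hz: float = 20e6) -> float:
        # The phase wraps at 2*pi, so measured depths repeat every c / (2 * f_mod).
        return C / (2.0 * f_mod_hz)

    # tof_depth(math.pi / 2) -> ~1.87 m;  unambiguous_range() -> ~7.5 m at 20 MHz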

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range-sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible and in a way that allows them to be adapted to any other range-sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range-sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Accepted for publication in Computer Vision and Image Understanding (CVIU); 58 pages, 23 figures.
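
    As an example of the kind of measurement such an evaluation framework relies on (a generic sketch, not one of the paper's seven setups): per-pixel temporal noise of a static scene, estimated as the standard deviation over repeated depth frames, allows a like-for-like comparison of the two Kinect versions. Variable names in the usage comment are hypothetical.

    import numpy as np

    def temporal_noise(depth_frames: np.ndarray) -> np.ndarray:
        # depth_frames: stack of shape (n_frames, H, W); invalid pixels assumed encoded as 0.
        frames = depth_frames.astype(float)
        frames[frames == 0] = np.nan      # ignore unmeasured pixels
        return np.nanstd(frames, axis=0)  # HxW map of per-pixel depth noise

    # noise_v1 = temporal_noise(np.stack(frames_kinect_v1))   # hypothetical recorded frame lists
    # noise_v2 = temporal_noise(np.stack(frames_kinect_one))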

    Autonomous Driving: A Critical Assessment of Technical Feasibility

    In this short overview, I explain why fully autonomous driving of sufficient quality will not be technically feasible in the foreseeable future. The decisive point is that a non-negligible number of situations require comparatively good scene understanding, and with the current state of the art we have no idea how such scene understanding could be achieved.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
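
    One of the optical routes commonly covered in this area is passive stereo with a calibrated stereo laparoscope; the sketch below shows only the final depth-from-disparity step, with the focal length, baseline and disparity source being illustrative assumptions rather than values from this paper.

    import numpy as np

    def disparity_to_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
        # Z = f * B / d per pixel of a rectified stereo pair; non-positive disparities are invalid.
        depth = np.full(disparity_px.shape, np.nan)
        valid = disparity_px > 0
        depth[valid] = focal_px * baseline_m / disparity_px[valid]
        return depth

    # depth_map = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.004)  # ~4 mm stereo baseline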