10 research outputs found

    Specialized depth extraction for live soccer

    Get PDF
    Sheets presentation

    Electrohysterographic conduction velocity estimation

    Get PDF
    Monitoring and analysis of the fetal heart and of the uterine-muscle activity, the latter referred to as the electrohysterogram (EHG), are essential to permit timely treatment during pregnancy. While remarkable progress has been reported for monitoring of the fetal cardiac activity, EHG measurement and interpretation remain challenging, and limited knowledge is available on the underlying physiological processes. In particular, little attention has been paid to the analysis of EHG propagation, whose characteristics might indicate the presence of coordinated uterine contractions leading to an intrauterine pressure increase. Therefore, this study focuses for the first time on the noninvasive estimation of the conduction velocity of EHG action potentials by means of multichannel EHG recordings with surface high-density electrodes. A maximum likelihood algorithm, initially proposed for skeletal-muscle electromyography, is modified for the required EHG analysis. Clustering and weighting are introduced to deal with poor signal similarity between different channels. The presented methods were evaluated with dedicated simulations, which proved the combination of weighting and clustering to be the most accurate. A preliminary EHG measurement during labor confirmed the feasibility of the method. An extensive clinical validation will, however, be necessary to optimize the method and assess the relevance of the EHG conduction velocity for pregnancy monitoring.
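
    A minimal sketch of the underlying idea, assuming uniformly spaced electrodes along the propagation direction: the inter-channel delay is estimated here with a simple cross-correlation peak (a stand-in for the paper's maximum likelihood estimator), and the normalized correlation peak serves as a rough per-pair weight, loosely echoing the weighting idea; the clustering step is not reproduced.

```python
# Sketch: conduction velocity from inter-electrode delay estimation.
# Cross-correlation is a simple stand-in for the maximum likelihood
# delay estimator; the correlation peak doubles as a per-pair weight.
import numpy as np

def pair_delay(x, y, fs):
    """Delay (s) of y relative to x via the cross-correlation peak."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    c = np.correlate(y, x, mode="full")
    lag = np.argmax(c) - (len(x) - 1)
    weight = c.max() / len(x)              # normalized peak correlation
    return lag / fs, weight

def conduction_velocity(channels, electrode_spacing_m, fs):
    """Weighted average conduction velocity (m/s) over adjacent pairs."""
    velocities, weights = [], []
    for a, b in zip(channels[:-1], channels[1:]):
        d, w = pair_delay(a, b, fs)
        if abs(d) > 1e-6:                  # skip near-zero delays
            velocities.append(electrode_spacing_m / d)
            weights.append(max(w, 1e-6))
    return np.average(velocities, weights=weights)

# Synthetic example: one action potential propagating at 5 cm/s
fs, spacing, v = 200.0, 0.005, 0.05        # Hz, m, m/s
t = np.arange(0, 10, 1 / fs)
def template(shift):
    return np.exp(-((t - 5 - shift) ** 2) / 0.05)
chans = [template(i * spacing / v) for i in range(4)]
print(conduction_velocity(chans, spacing, fs))   # ~0.05 m/s
```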

    Efficient enhancement and extraction of depth for 3-D video

    Get PDF

    Efficient and stable sparse-to-dense conversion for automatic 2-D to 3-D conversion

    No full text
    Various important 3D depth cues, such as focus, motion, occlusion, and disparity, can only be estimated reliably at distinct sparse image locations like edges and corners. Hence, for 2D-to-3D video conversion, a stable and smooth sparse-to-dense conversion is required to propagate these sparse estimates to the complete video. To this end, optimization-, segmentation-, and triangulation-based approaches have been proposed recently. While optimization-based approaches produce accurate dense maps, the resulting energy functions are very hard to minimize within the stringent requirements of real-time video processing. In addition, segmentation- and triangulation-based approaches can cause incorrect delineation of object boundaries. Finally, dense maps that are independently estimated from video images suffer from temporal instabilities. To deal with the real-time issue, we propose an innovative low-latency, line-scanning-based sparse-to-dense conversion algorithm with a low computational complexity. To mitigate the stability and smoothness issues, we additionally propose a recursive spatio-temporal post-processing and an efficient joint bilateral up-sampling method. We illustrate the performance of the resulting sparse-to-dense converter on dense defocus maps. We also show a subjective assessment of 2D-to-3D conversion results using a paired comparison on a variety of challenging low-depth-of-field test sequences. The results demonstrate that the proposed approach achieves the same 3D depth and video quality as state-of-the-art sparse-to-dense converters with significantly reduced computational complexity and memory usage.
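
    A minimal sketch of the general idea of edge-aware scan-line propagation of sparse depth samples, under simplified assumptions (a single intensity guide image and exponential confidence decay); it is a generic illustration, not the authors' low-latency algorithm or their joint bilateral up-sampling stage.

```python
# Sketch: each row is swept left-to-right and right-to-left, carrying the
# last seen sparse depth value with a confidence that decays with distance
# and with intensity changes in the guide, so depth does not cross edges.
import numpy as np

def scanline_fill(sparse_depth, valid, guide, sigma_i=10.0, decay=0.97):
    h, w = sparse_depth.shape
    dense = np.zeros((h, w))
    conf = np.zeros((h, w))
    for direction in (1, -1):                      # forward and backward pass
        for y in range(h):
            carry_d, carry_c = 0.0, 0.0
            xs = range(w) if direction == 1 else range(w - 1, -1, -1)
            prev_i = None
            for x in xs:
                i = float(guide[y, x])
                if prev_i is not None:
                    # attenuate confidence across intensity edges
                    carry_c *= decay * np.exp(-abs(i - prev_i) / sigma_i)
                prev_i = i
                if valid[y, x]:
                    carry_d, carry_c = sparse_depth[y, x], 1.0
                dense[y, x] += carry_c * carry_d
                conf[y, x] += carry_c
    return dense / np.maximum(conf, 1e-6)

# Toy example: two depth seeds on either side of an intensity edge
guide = np.zeros((1, 10)); guide[0, 5:] = 255
sparse = np.zeros((1, 10)); valid = np.zeros((1, 10), bool)
sparse[0, 0], valid[0, 0] = 1.0, True
sparse[0, 9], valid[0, 9] = 5.0, True
print(scanline_fill(sparse, valid, guide).round(2))   # ~1 left, ~5 right
```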

    Real-time robust background subtraction under rapidly changing illumination conditions.

    No full text
    Fast, robust background subtraction under sudden lighting changes is a challenging problem in many applications. In this paper, we propose a real-time approach that combines the Eigenbackground and Statistical Illumination methods to address this issue. The former is used to reconstruct the background frame, while the latter improves the foreground segmentation. In addition, we introduce an online spatial likelihood model by detecting reliable background pixels. Extensive quantitative experiments illustrate that our approach consistently achieves significantly higher precision at high recall rates compared to several state-of-the-art illumination-invariant background subtraction methods.
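
    A minimal eigenbackground sketch: a PCA model of background frames is used to reconstruct the background of an incoming frame, and pixels with a large reconstruction error are marked as foreground. The statistical-illumination refinement and the online spatial likelihood model from the paper are not reproduced, and the threshold value is illustrative.

```python
# Sketch: eigenbackground via PCA; foreground = large reconstruction error.
import numpy as np

def train_eigenbackground(frames, n_components=8):
    """frames: (N, H*W) float array of flattened background frames."""
    mean = frames.mean(axis=0)
    u, s, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:n_components]                 # mean and eigenvectors

def segment(frame, mean, basis, thresh=25.0):
    """Return a boolean foreground mask for a flattened frame."""
    coeffs = basis @ (frame - mean)
    background = mean + basis.T @ coeffs           # projection onto eigenspace
    return np.abs(frame - background) > thresh

# Toy example: noisy "background" frames plus one injected bright region
rng = np.random.default_rng(0)
h, w = 24, 32
bg = 100 + 5 * rng.standard_normal((50, h * w))   # 50 training frames
mean, basis = train_eigenbackground(bg)
test = bg[0].copy()
test[:40] += 120                                   # inject a foreground region
mask = segment(test, mean, basis).reshape(h, w)
print(mask.sum(), "foreground pixels")             # roughly 40 expected
```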

    Overview of efficient high-quality state-of-the-art depth enhancement methods by thorough design space exploration

    Get PDF
    High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have a low resolution and suffer from unreliable estimates and noise. Therefore, depth enhancement is necessary. Depth enhancement comprises two stages: depth upsampling and temporal post-processing. In this paper, we extend our previous work on depth upsampling in two ways. First, we propose PWAS-MCM, a new depth upsampling method, and show that on average it achieves the highest depth accuracy among efficient state-of-the-art depth upsampling methods. Second, we benchmark all relevant state-of-the-art filter-based temporal post-processing methods on depth accuracy by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels, and we analyze these methods qualitatively. Finally, we analyze the computational complexity of each depth upsampling and temporal post-processing method by measuring the throughput and hardware utilization of the GPU implementation that we built for each method.
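
    A minimal joint bilateral upsampling sketch, illustrating the class of filter-based depth upsampling methods compared in the paper; PWAS-MCM itself is not reproduced here, and all parameter values are illustrative.

```python
# Sketch: upsample a low-res depth map with weights combining spatial
# distance and intensity similarity in a high-res guidance image.
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, factor,
                             sigma_s=1.0, sigma_r=12.0, radius=2):
    h_hr, w_hr = guide_hr.shape
    out = np.zeros((h_hr, w_hr))
    for y in range(h_hr):
        for x in range(w_hr):
            yl, xl = y / factor, x / factor          # position in low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0] and
                            0 <= xs < depth_lr.shape[1]):
                        continue
                    # spatial weight in low-res coordinates
                    ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight from the high-res guide
                    g = guide_hr[min(ys * factor, h_hr - 1),
                                 min(xs * factor, w_hr - 1)]
                    wr = np.exp(-((g - guide_hr[y, x]) ** 2)
                                / (2 * sigma_r ** 2))
                    w = ws * wr
                    num += w * depth_lr[ys, xs]
                    den += w
            out[y, x] = num / max(den, 1e-12)
    return out

# Toy example: 4x upsampling of an 8x8 depth map guided by a 32x32 image
lr = np.tile(np.linspace(0, 1, 8), (8, 1))
guide = np.tile(np.linspace(0, 255, 32), (32, 1))
print(joint_bilateral_upsample(lr, guide, 4).shape)   # (32, 32)
```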

    Evaluation of efficient high quality depth upsampling methods for 3DTV

    No full text
    High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have a low resolution and suffer from unreliable estimates and noise. Therefore, depth post-processing is necessary. In this paper, we benchmark state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels. Additionally, we analyze each method’s computational complexity with big-O notation and measure the runtime of the GPU implementation that we built for each method. © (2013) Society of Photo-Optical Instrumentation Engineers (SPIE).
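
    A minimal sketch of the benchmarking protocol described here: sweep a method's parameters over several upscale factors and noise levels, score depth accuracy by RMSE against ground truth, and record the runtime per call. The upsampler below is a simple Gaussian-smoothed nearest-neighbour stand-in, not one of the filters evaluated in the paper, and the GPU implementations are not modelled.

```python
# Sketch: parameter space search over (upscale factor, noise level, sigma),
# scored by RMSE against a synthetic ground-truth depth map, with timing.
import itertools, time
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def upsample(depth_lr, factor, sigma):
    return gaussian_filter(zoom(depth_lr, factor, order=0), sigma)

def benchmark(depth_gt, factors, noise_levels, sigmas, rng):
    results = []
    for f, n, s in itertools.product(factors, noise_levels, sigmas):
        lr_clean = depth_gt[::f, ::f]
        lr = lr_clean + n * rng.standard_normal(lr_clean.shape)
        t0 = time.perf_counter()
        up = upsample(lr, f, s)
        dt = time.perf_counter() - t0
        rmse = np.sqrt(np.mean((up - depth_gt) ** 2))
        results.append((f, n, s, rmse, dt))
    return sorted(results, key=lambda r: r[3])      # best RMSE first

rng = np.random.default_rng(1)
gt = np.tile(np.linspace(0, 1, 64), (64, 1))        # synthetic ground truth
best = benchmark(gt, factors=[2, 4], noise_levels=[0.0, 0.05],
                 sigmas=[0.5, 1.0, 2.0], rng=rng)[0]
print("best (factor, noise, sigma, rmse, time):", best)
```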

    Specialized depth extraction for live soccer

    No full text
    Sheets presentation

    Continuous-Spectrum Infrared Illuminator for Camera-PPG in Darkness

    No full text
    Many camera-based remote photoplethysmography (PPG) applications require sensing in the near infrared (NIR). The performance of PPG systems benefits from multi-wavelength processing. The illumination source for such a system is explored in this paper. We demonstrate that multiple narrow-band LEDs have inferior color homogeneity compared to broadband light sources. Therefore, we consider a broadband option based on phosphor material excited by LEDs. A first prototype was realized and its details are discussed. It was tested in a remote-PPG monitoring scenario in darkness, and the full system demonstrates robust pulse-rate measurement. Given its accuracy in pulse-rate extraction, the proposed illumination principle is considered a valuable asset for large-scale NIR-PPG applications, as it enables multi-wavelength processing and lightweight set-ups with relatively low-power infrared light sources.
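
    A minimal sketch of pulse-rate extraction from camera-PPG traces: the wavelength channels are simply averaged here (a naive stand-in for the paper's multi-wavelength processing), band-pass filtered, and the dominant spectral peak is reported in beats per minute. The band limits and signal model are illustrative; the illuminator hardware is not modelled.

```python
# Sketch: pulse rate from averaged camera-PPG channels via band-pass + FFT peak.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_bpm(traces, fs, band=(0.7, 3.0)):
    """traces: (n_wavelengths, n_samples) mean skin-pixel values per frame."""
    x = traces.mean(axis=0)                       # naive channel combination
    x = (x - x.mean()) / (x.std() + 1e-12)
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    valid = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[valid][np.argmax(spectrum[valid])]

# Synthetic example: three NIR channels with a 72 bpm pulse plus noise
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
pulse = 0.01 * np.sin(2 * np.pi * 1.2 * t)        # 1.2 Hz = 72 bpm
traces = np.stack([1.0 + pulse + 0.02 * rng.standard_normal(t.size)
                   for _ in range(3)])
print(pulse_rate_bpm(traces, fs))                 # ~72
```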