187 research outputs found

    A disocclusion replacement approach to subjective assessment for depth map quality evaluation

    An inherent problem of Depth Image Based Rendering (DIBR) is the visual presence of disocclusions in the rendered views. This poses a significant challenge when the subjective assessment of these views is utilised for evaluating the quality of the depth maps used in the rendering process. Although various techniques are available to address this challenge, they end up concealing distortions that are directly caused by depth map imperfections. For the purposes of depth map quality evaluation, an approach is needed that deals with the presence of disocclusions without further affecting other distortions. The aim of such an approach is to enable subjective assessments of rendered views to provide results that are more representative of the quality of the depth map used in the rendering process.

    Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes

    The depth images from consumer depth cameras (e.g., structured-light/ToF devices) exhibit a substantial amount of artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot remove them entirely and are slow. This thesis proposes a new real-time spatio-temporal depth image enhancement filter that completely removes flickering and ghosting and significantly reduces holes. This thesis also presents a novel depth-data capture setup and two data reduction methods to optimize the performance of the proposed enhancement method.
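    To make the spatio-temporal idea concrete, the following Python sketch is an illustration only, not the filter proposed in the thesis; the array layout, the window size and the use of 0 as the hole marker are assumptions. It combines a temporal median over a short frame window with a spatial median pass to suppress flicker and shrink holes:

        import numpy as np
        from scipy.ndimage import median_filter

        def enhance_depth(frames, window=5):
            # frames: (T, H, W) array of depth maps, with 0 marking a missing (hole) pixel.
            # Illustrative sketch only: temporal stabilisation followed by spatial
            # hole reduction, not the method developed in the thesis.
            frames = np.asarray(frames, dtype=np.float32)
            out = np.empty_like(frames)
            half = window // 2
            for t in range(len(frames)):
                # Temporal component: a median over the valid samples in a short
                # window suppresses flicker without trailing (ghosting) artifacts.
                clip = frames[max(0, t - half):t + half + 1]
                valid = np.where(clip > 0, clip, np.nan)
                temporal = np.nan_to_num(np.nanmedian(valid, axis=0))
                # Spatial component: a median filter shrinks the remaining holes
                # by pulling in depth from the local neighborhood.
                spatial = median_filter(temporal, size=5)
                out[t] = np.where(temporal > 0, temporal, spatial)
            return out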

    Implementation of Depth Map Filtering on GPU

    The thesis work was part of the Mobile 3DTV project, which studied the capture, coding and transmission of 3D video representation formats in mobile delivery scenarios. The main focus of the study was to determine whether it is practical to transmit and view 3D video on mobile devices. The chosen approach for virtual view synthesis was Depth Image Based Rendering (DIBR). The computed depth is often inaccurate, noisy, low in resolution, or even inconsistent over a video sequence; therefore, the sensed depth map has to be post-processed and refined through proper filtering. A bilateral filter was used for the iterative refinement process, using the information from one of the associated high-quality texture (color) images (left or right view). The primary objective of this thesis was to perform the filtering operation in real time, so we ported the algorithm to a GPU. As the programming platform we chose OpenCL from the Khronos Group, because it targets heterogeneous parallel computing environments and is therefore platform-, vendor- and hardware-independent. The filtering algorithm proved well suited to GPU implementation: although every pixel uses the information from its neighborhood window, the processing of one pixel does not depend on the results of its surrounding pixels, so once the data for the neighborhood is loaded into the local memory of the multiprocessor, the device can process several pixels simultaneously. The results obtained from our experiments were quite encouraging. We executed the MEX implementation on a Core2Duo CPU with 2 GB of RAM and used an NVIDIA GeForce 240 as the GPU device, which has 96 cores, a 550 MHz graphics clock, a 1340 MHz processor clock and 512 MB of memory. The processing speed improved significantly and the quality of the depth maps was on par with the same algorithm running on a CPU. To test the effect of our filtering algorithm on a degraded depth map, we introduced artifacts by compressing it with an H.264 encoder, controlling the level of degradation with the quantization parameter. The blocky depth map was filtered separately using our GPU implementation and then the CPU implementation. The results showed a speed-up of up to 30 times, while producing refined depth maps with a quality measure similar to those processed with the CPU implementation.
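    A rough NumPy sketch of the per-pixel independence described above is given below. It implements a basic cross (joint) bilateral filter in which the texture image guides the smoothing of the depth map; every output pixel depends only on its own neighborhood, so each iteration of the inner loop corresponds to one independent work-item in a GPU kernel. The single-channel guide and the parameter names and values are assumptions for illustration, not the thesis's OpenCL code:

        import numpy as np

        def cross_bilateral_depth_filter(depth, guide, radius=3, sigma_s=3.0, sigma_r=0.1):
            # depth: (H, W) depth map to refine; guide: (H, W) grayscale texture image,
            # assumed normalised to [0, 1]. Illustrative sketch, not the thesis code.
            h, w = depth.shape
            out = np.zeros((h, w), dtype=np.float32)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # spatial kernel
            d = np.pad(depth.astype(np.float32), radius, mode='edge')
            g = np.pad(guide.astype(np.float32), radius, mode='edge')
            for y in range(h):
                for x in range(w):
                    # Each (y, x) output is independent of all others: one GPU work-item.
                    dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    # Range kernel computed on the guide image keeps depth edges
                    # aligned with color edges (cross/joint bilateral filtering).
                    rng = np.exp(-((gwin - g[y + radius, x + radius]) ** 2) / (2.0 * sigma_r**2))
                    weights = spatial * rng
                    out[y, x] = np.sum(weights * dwin) / np.sum(weights)
            return out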

    Full-Field Damage Assessment of Notched CFRP Laminates

    The work presented in this thesis constitutes the first dedicated application of surface full-field experimental techniques to the comprehensive damage assessment of open-hole compression (OHC) in composite laminates, under both static and fatigue loading. The relevance of the work comes from OHC being one of the two main tests used in industry to measure the damage tolerance of composite material systems. The main motivation for the work is a gap in the published literature concerning where different damage events occur during the life of notched composite structures. Additionally, the effect of toughening laminates by interleaving of particles, intended to improve damage tolerance, was studied. As such, the main goal was to demonstrate the viability of using full-field non-contact experimental techniques to study the evolution of damage in notched carbon fibre reinforced polymer laminates. The specific techniques used were thermoelastic stress analysis (TSA) and digital image correlation (DIC). It was found that a characteristic damage sequence is independent of the material system and that final failure of the laminate is controlled by the development of crush zones at the east and west sides of the hole. These crush zones result from the collapse of kink bands whose development is in turn controlled by matrix cracking early in the life of the laminate. Hence, by characterizing the sequence of damage events and where they occur in notched coupons, the design allowables of actual composite structures can be better approximated. Regarding the effect of particle interleaving, statistical analysis of life data showed that it could not be concluded that this kind of toughening improves the OHC fatigue life of the laminates tested. The work presented in this thesis thereby demonstrates that TSA and DIC can be applied to the study of damage in composite laminates and thus represents a significant step towards an improved understanding of damage morphology and evolution in heterogeneous materials.

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
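    The robustness property underlying this approach can be seen in isolation with a short SciPy sketch (an illustration of the general idea only, not the paper's HMM; the synthetic data and outlier fraction are made up): fitting a Gaussian and a Student's t-distribution to the same contaminated feature stream shows the Gaussian location estimate being pulled towards the outliers, while the t estimate stays near the bulk of the inliers.

        import numpy as np
        from scipy import stats

        # Illustrative comparison (not the paper's HMM): location estimates from a
        # Gaussian fit versus a Student's t fit on an outlier-contaminated stream.
        rng = np.random.default_rng(0)
        inliers = rng.normal(loc=0.0, scale=1.0, size=500)    # well-extracted features
        outliers = rng.normal(loc=8.0, scale=0.5, size=100)   # corrupted extractions
        data = np.concatenate([inliers, outliers])

        mu_gauss, _ = stats.norm.fit(data)     # Gaussian MLE: mean dragged by outliers
        df, mu_t, scale_t = stats.t.fit(data)  # t MLE: heavy tails down-weight outliers

        print(f"Gaussian location estimate:    {mu_gauss:.2f}")
        print(f"Student's t location estimate: {mu_t:.2f}")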

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and underlying technologies, presented by leading practitioners and researchers from many parts of the world.