195 research outputs found

    3D video coding and transmission

    The capture, transmission, and display of 3D content have gained considerable attention in recent years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted as stereoscopic video over satellite, shared on Blu-ray™ discs, or delivered over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer must wear special glasses so that the two versions of the video are presented to the human visual system, which then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light-field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.

    3D multiple description coding for error resilience over wireless networks

    Mobile communications have gained growing interest from both customers and service providers over the last two decades. Visual information is used in many application domains, such as remote health care, video-on-demand, broadcasting, and video surveillance. To enhance the visual effect of digital video content, depth perception needs to be provided alongside the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one widely used technique in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied directly to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth-demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion.
Given the maximum bit-rate budget for representing the 3D scene, the bit-rate allocation between texture and depth information should be optimised so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer better quality of experience (QoE) to end users. This research work aims at enhancing the error resilience of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience (QoE). Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing: people's perception of 3D video is rated under error-free and error-prone conditions through a carefully designed bespoke questionnaire.
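The core idea of MDC described above can be illustrated with a minimal sketch: the encoder splits a frame sequence into two independently decodable descriptions (here, simply even- and odd-indexed frames), and the decoder conceals a lost description by interpolating between received neighbours. This is a hypothetical toy example of temporal MDC, not the thesis's actual codec; all function names are illustrative.

```python
# Toy temporal Multiple Description Coding (MDC) sketch.
# Descriptions are even/odd frame subsequences; frames here are
# plain numbers standing in for pictures.

def encode_mdc(frames):
    """Split a frame sequence into two independently decodable descriptions."""
    d0 = frames[0::2]  # even-indexed frames
    d1 = frames[1::2]  # odd-indexed frames
    return d0, d1

def decode_mdc(d0, d1, n_frames):
    """Merge received descriptions; conceal a lost one by interpolation."""
    out = [None] * n_frames
    if d0 is not None:
        out[0::2] = d0
    if d1 is not None:
        out[1::2] = d1
    for i, f in enumerate(out):
        if f is None:  # conceal from nearest received neighbours
            left = out[i - 1] if i > 0 else out[i + 1]
            right = out[i + 1] if i + 1 < n_frames else out[i - 1]
            out[i] = (left + right) / 2  # simple temporal average
    return out

frames = [float(i) for i in range(8)]
d0, d1 = encode_mdc(frames)
full = decode_mdc(d0, d1, 8)         # both descriptions received: lossless merge
concealed = decode_mdc(d0, None, 8)  # description 1 lost: frames interpolated
```

With both descriptions the sequence is recovered exactly; with one lost, the concealed frames degrade gracefully instead of stalling playback, which is exactly the resilience property MDC trades bit rate for.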

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Information embedding and retrieval in 3D printed objects

    Deep learning and convolutional neural networks have become the main tools of computer vision. These techniques use supervised learning to learn complex representations from data; in particular, under limited settings, image recognition models now perform better than the human baseline. However, computer vision science aims to build machines that can see, which requires models to extract more valuable information from images and videos than recognition alone. In general, it is much more challenging to transfer these deep learning models from recognition to other problems in computer vision. This thesis presents end-to-end deep learning architectures for a new computer vision problem: watermark retrieval from 3D printed objects. As it is a new area, there is no state of the art on many challenging benchmarks. Hence, we first define the problems and introduce a traditional approach, the Local Binary Pattern method, to set our baseline for further study. Our neural networks are simple yet effective, outperforming the traditional approaches, and they generalize well. However, because the research field is new, the problems we face include not only various unpredictable parameters but also limited and low-quality training data. To address this, we make two observations: (i) we do not need to learn everything from scratch, since much is already known about image segmentation; and (ii) we cannot learn everything from data, so our models should be aware of which key features they should learn. This thesis explores these ideas and goes further. We show how to use end-to-end deep learning models to learn to retrieve watermark bumps and to tackle covariates from a small number of training images. Secondly, we introduce ideas from synthetic image data and domain randomization to augment the training data and to understand the various covariates that may affect the retrieval of real-world 3D watermark bumps. We also show how illumination in synthetic image data affects, and can even improve, retrieval accuracy in real-world recognition applications.
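The Local Binary Pattern baseline mentioned above can be sketched in a few lines: each pixel is encoded as an 8-bit code by comparing it with its 3x3 neighbourhood, and a histogram of codes summarises the local texture. This is a minimal illustration of the classic LBP descriptor, not the thesis's tuned baseline; the 8-neighbour layout and thresholding rule are the standard textbook choices.

```python
# Minimal Local Binary Pattern (LBP) texture descriptor.
# Images are plain 2D lists of grey levels.

def lbp_code(img, y, x):
    """8-bit LBP code for pixel (y, x): one bit per 3x3 neighbour."""
    centre = img[y][x]
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
code = lbp_code(img, 1, 1)  # bits set where a neighbour >= centre
```

Such histograms can then be compared (e.g. by chi-squared distance) to match watermark-bump textures, which is the sense in which LBP serves as a retrieval baseline before the learned models take over.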

    Cooperative systems based signal processing techniques with applications to three-dimensional video transmission

    Three-dimensional (3-D) video has recently emerged to offer an immersive multimedia experience that cannot be offered by two-dimensional (2-D) video applications. Currently, both industry and academia are focused on delivering 3-D video services over wireless communication systems. Modern video communication systems adopt cooperative communication and orthogonal frequency division multiplexing (OFDM), as these are an attractive solution to combat fading in wireless communication systems and achieve high data rates. However, this strong motivation to transmit video signals over wireless systems faces many challenges: mainly channel bandwidth limitations, variations of signal-to-noise ratio (SNR) in wireless channels, and impairments in the physical layer such as time-varying phase noise (PHN) and carrier frequency offset (CFO). In response to these challenges, this thesis seeks to develop efficient 3-D video transmission methods and signal processing algorithms that can overcome the effects of error-prone wireless channels and impairments in the physical layer. In the first part of the thesis, an efficient unequal error protection (UEP) scheme, called video packet partitioning, and a new 3-D video transceiver structure are proposed. The proposed video transceiver switches between various UEP schemes based on the packet partitioning to achieve a trade-off between system complexity and performance. Experimental results show that the proposed system achieves significantly higher video quality at different SNRs with the lowest possible bandwidth and system complexity compared to direct transmission schemes. The second part of the thesis proposes a new approach to joint source-channel coding (JSCC) that simultaneously assigns source code rates, the number of high- and low-priority packets, and channel code rates for the application, network, and physical layers, respectively.
The proposed JSCC algorithm takes into account the rate budget constraint and the available instantaneous SNR of the best relay selection in cooperative systems. Experimental results show that the proposed JSCC algorithm outperforms existing algorithms in terms of peak signal-to-noise ratio (PSNR). In the third part of the thesis, a computationally efficient training-based approach for joint channel, CFO, and PHN estimation in OFDM systems is proposed. The proposed estimator is based on an expectation conditional maximization (ECM) algorithm. To assess the estimation accuracy of the proposed estimator, the hybrid Cramér-Rao lower bound (HCRB) of the hybrid parameters of interest is derived. Next, to detect the signal in the presence of PHN, an iterative receiver based on the extended Kalman filter (EKF) for joint data detection and PHN mitigation is proposed. It is demonstrated by numerical simulations that the performance of the proposed ECM-based estimator in terms of mean square error (MSE) is closer to the derived HCRB and outperforms existing estimation algorithms at moderate-to-high SNRs. Finally, this study extends the research on joint channel, PHN, and CFO estimation one step further, from OFDM systems to cooperative OFDM systems. An iterative algorithm based on the ECM is applied in cooperative OFDM networks in the presence of unknown channel gains, PHNs and CFOs. Moreover, the HCRB for the joint estimation problem in both decode-and-forward (DF) and amplify-and-forward (AF) relay systems is presented. An iterative algorithm based on the EKF for data detection and for tracking the unknown time-varying PHN throughout the OFDM data packet is also used. For more efficient 3-D video transmission, the estimation algorithms and the UEP schemes based on packet partitioning were combined to achieve a more robust video bit stream in the presence of PHNs.
Applying this combination, simulation results demonstrate that promising bit-error-rate (BER) and PSNR performance can be achieved at the destination at different SNRs and PHN variances. The proposed schemes and algorithms offer solutions to existing problems in 3-D video transmission.
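The unequal error protection idea behind the packet partitioning scheme can be sketched as a budgeted rate-allocation problem: high-priority packets (e.g. base-layer or depth data) receive a stronger, lower-rate channel code than low-priority packets, subject to a total channel-bit budget. The code rates, packet sizes, and greedy allocation below are illustrative assumptions, not the thesis's actual optimisation.

```python
# Hypothetical UEP packet-partitioning sketch: promote packets to
# high priority (stronger channel code) while the bit budget allows.
# A rate-r code expands k information bits into k / r channel bits.

def uep_bit_cost(n_high, n_low, pkt_bits, r_high=1/2, r_low=3/4):
    """Total channel bits when high/low packets use code rates r_high/r_low."""
    return n_high * pkt_bits / r_high + n_low * pkt_bits / r_low

def partition_packets(packets, budget, pkt_bits=1000):
    """Greedily promote the most important packets (list assumed pre-sorted
    by importance) to high priority while staying within the budget."""
    n = len(packets)
    n_high = 0
    while n_high < n and uep_bit_cost(n_high + 1, n - n_high - 1, pkt_bits) <= budget:
        n_high += 1
    return packets[:n_high], packets[n_high:]

# Four 1000-bit packets, depth data first (most important).
pkts = ["depth0", "depth1", "tex0", "tex1"]
high, low = partition_packets(pkts, budget=6000)
```

With a 6000-bit budget, only the first packet fits under the stronger rate-1/2 code; the rest fall back to rate 3/4. Tightening or relaxing the budget shifts this boundary, which is the trade-off between protection strength and bandwidth that the switching transceiver exploits.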

    Integrated navigation and visualisation for skull base surgery

    Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel diffusion magnetic resonance imaging (dMRI) acquisition and processing protocol for imaging the facial nerve in patients with VS was developed to improve delineation of the facial nerve preoperatively. An automated artificial intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation and intraoperative 3D ultrasound was developed and validated using an anatomically realistic acoustic phantom model of a head, including the skull, brain and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine different types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an intraoperative hyperspectral imaging (iHSI) system. Finally, functional and technical requirements of an iHSI system were established, and a prototype system was developed and tested in a first-in-patient study.