
    Wireless capsule gastrointestinal endoscopy: direction of arrival estimation based localization survey

    One of the significant challenges in Capsule Endoscopy (CE) is to precisely determine the location of pathologies. The capsule's location is primarily estimated from the received signal strength at sensors in the capsule system as it moves through the gastrointestinal (GI) tract. Consequently, the wireless capsule endoscope (WCE) system requires improvement to compensate for the lack of instantaneous capsule localization information and to overcome the relatively low transmission data rate. Furthermore, the association between the capsule's transmitter position, capsule location, signal attenuation and capsule direction should be assessed, since these measurements deliver significant information for instantaneous capsule localization. Systems based on TOA (time of arrival), PDOA (phase difference of arrival), RSS (received signal strength), electromagnetic, DOA (direction of arrival) and video tracking approaches have been developed to locate the WCE precisely. The current article introduces the acquisition of GI medical images using endoscopy, with a comprehensive description of the endoscopy system components. Capsule localization and tracking are considered the most important features of the WCE system; the article therefore reviews the most common localization systems, highlights DOA-based localization systems, and discusses the significant research challenges that remain to be addressed.
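
    As a concrete illustration of RSS-based localization (a generic sketch, not the survey's own method: the path-loss exponent, reference power and sensor layout below are assumptions), the received signal strength can be inverted through a log-distance path-loss model to range the capsule from each on-body sensor, and the ranges can then be combined by least-squares trilateration:

        # Hypothetical sketch: RSS ranging via a log-distance path-loss model,
        # followed by least-squares trilateration from on-body sensor positions.
        import numpy as np
        from scipy.optimize import least_squares

        def rss_to_distance(rss_dbm, rss_d0=-40.0, n=4.0, d0=0.1):
            """Invert RSS = RSS(d0) - 10*n*log10(d/d0); in-body channels are
            lossy, so the assumed path-loss exponent n is large."""
            return d0 * 10.0 ** ((rss_d0 - rss_dbm) / (10.0 * n))

        def trilaterate(sensor_xyz, distances):
            """Find the capsule position whose sensor distances best match."""
            x0 = sensor_xyz.mean(axis=0)  # start from the sensor centroid
            residual = lambda p: np.linalg.norm(sensor_xyz - p, axis=1) - distances
            return least_squares(residual, x0).x

        # Four on-body sensors (metres) and the RSS each measured (dBm).
        sensors = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                            [0.0, 0.3, 0.0], [0.15, 0.15, 0.25]])
        rss = np.array([-62.0, -58.0, -65.0, -60.0])
        print(trilaterate(sensors, rss_to_distance(rss)))

    DOA-based systems, which the survey highlights, instead estimate bearing angles with a sensor array and locate the capsule by intersecting the resulting direction lines.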

    Efficient Nonlinear Dimensionality Reduction for Pixel-wise Classification of Hyperspectral Imagery

    Classification, target detection, and compression are all important tasks in analyzing hyperspectral imagery (HSI). Because of the high dimensionality of HSI, it is often useful to identify low-dimensional representations of HSI data that make analysis tasks tractable. Traditional linear dimensionality reduction (DR) methods are not adequate due to the nonlinear distribution of HSI data. Many nonlinear DR methods that are successful in the general data processing domain, such as Local Linear Embedding (LLE) [1], Isometric Feature Mapping (ISOMAP) [2] and Kernel Principal Components Analysis (KPCA) [3], run very slowly and require large amounts of memory when applied to HSI. For example, applying KPCA to the 512×217-pixel, 204-band Salinas image using a modern desktop computer (AMD FX-6300 six-core processor, 32 GB memory) requires more than 5 days of computing time and 28 GB of memory. In this thesis, we propose two different algorithms for significantly improving the computational efficiency of nonlinear DR without adversely affecting classification performance: Simple Linear Iterative Clustering (SLIC) superpixels and semi-supervised deep autoencoder networks (SSDAN).

    SLIC is a very popular algorithm developed for computing superpixels in RGB images that can easily be extended to HSI. Each superpixel comprises hundreds or thousands of pixels grouped by spatial and spectral similarity and is represented by the mean spectrum and spatial position of its component pixels. Since the number of superpixels is much smaller than the number of pixels in the image, they can be used as input for nonlinear DR, which significantly reduces the required computation time and memory compared with providing all of the original pixels as input. After nonlinear DR is performed using superpixels as input, an interpolation step recovers the embedding of each original image pixel in the low-dimensional space. To illustrate the power of using superpixels in an HSI classification pipeline, we conduct experiments on three widely used and publicly available hyperspectral images: Indian Pines, Salinas and Pavia. The experimental results for all three images demonstrate that, for moderately sized superpixels, the overall accuracy of classification using superpixel-based nonlinear DR matches and sometimes exceeds that of pixel-based nonlinear DR, with a computational speed that is two to three orders of magnitude faster.

    Even though superpixel-based nonlinear DR shows promise for HSI classification, it has disadvantages. First, out-of-sample extensions are costly. Second, it does not generalize to other types of data that lack spatial information. Third, the original input pixels cannot be approximately recovered, as is possible in many DR algorithms. To overcome these difficulties, a new autoencoder network, SSDAN, is proposed. It is a fully connected semi-supervised autoencoder network that performs nonlinear DR in a manner that enables class information to be integrated. Features learned by an SSDAN are similar to those computed via traditional nonlinear DR, and features from the same class are close to each other. Once the network is well trained, test data can easily be mapped to the low-dimensional embedding. Any kind of data can be used to train an SSDAN, and the decoder portion of the SSDAN can recover the initial input with reasonable loss. Experimental results on pixel-based classification of the Indian Pines, Salinas and Pavia images show that SSDANs can approximate the overall accuracy of nonlinear DR while significantly improving computational efficiency. We also show that transfer learning can be used to fine-tune the features of a trained SSDAN for a new HSI dataset. Finally, experimental results on HSI compression show a trade-off between the Overall Accuracy (OA) of extracted features and the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image.
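
    A minimal sketch of the superpixel pipeline described above (the SLIC parameters and the use of scikit-image/scikit-learn are assumptions; the thesis interpolates per-pixel embeddings, whereas this sketch simply assigns each pixel the embedding of its superpixel):

        # Hypothetical sketch: SLIC superpixels shrink the input handed to
        # nonlinear DR from ~10^5 pixels to ~10^3 superpixel mean spectra.
        import numpy as np
        from skimage.segmentation import slic
        from sklearn.decomposition import KernelPCA

        def superpixel_kpca(cube, n_segments=2000, n_components=10):
            """cube: (rows, cols, bands) HSI, e.g. the 512x217x204 Salinas image."""
            labels = slic(cube, n_segments=n_segments, compactness=0.1,
                          channel_axis=-1)  # groups by spatial+spectral similarity
            ids = np.unique(labels)
            # Mean spectrum per superpixel: the only points given to nonlinear DR.
            means = np.stack([cube[labels == i].mean(axis=0) for i in ids])
            emb = KernelPCA(n_components=n_components,
                            kernel="rbf").fit_transform(means)
            # Broadcast each superpixel's embedding back to its member pixels.
            lut = np.zeros((labels.max() + 1, n_components))
            lut[ids] = emb
            return lut[labels]  # (rows, cols, n_components)

    The KPCA kernel matrix is now on the order of 2000×2000 instead of roughly 10^5 × 10^5, which is where the two-to-three-orders-of-magnitude saving comes from.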
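
    The SSDAN idea can likewise be sketched as a fully connected autoencoder whose bottleneck also feeds a classifier, so that reconstruction and class separation are trained jointly (layer sizes and the loss weight are illustrative assumptions, not the thesis configuration):

        # Hypothetical sketch: semi-supervised autoencoder; unlabeled pixels
        # contribute only to the reconstruction term of the joint loss.
        import torch
        import torch.nn as nn

        class SSDAN(nn.Module):
            def __init__(self, n_bands=204, n_latent=10, n_classes=16):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_bands, 128), nn.ReLU(),
                                             nn.Linear(128, n_latent))
                self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                             nn.Linear(128, n_bands))
                self.classifier = nn.Linear(n_latent, n_classes)

            def forward(self, x):
                z = self.encoder(x)  # low-dimensional embedding
                return z, self.decoder(z), self.classifier(z)

        def ssdan_loss(x, y, recon, logits, weight=1.0):
            """y holds class indices, with -1 marking unlabeled pixels."""
            loss = nn.functional.mse_loss(recon, x)  # every pixel reconstructs
            labeled = y >= 0
            if labeled.any():  # class term pulls same-class features together
                loss = loss + weight * nn.functional.cross_entropy(
                    logits[labeled], y[labeled])
            return loss

    Because the decoder is trained alongside the embedding, the initial input can be approximately recovered, and the trained encoder maps new test pixels to the embedding at negligible cost, addressing the out-of-sample limitation noted above.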

    Depth-Map-Assisted Texture and Depth Map Super-Resolution

    With the development of video technology, high-definition video and 3D video applications are becoming increasingly accessible to customers. The interactive and vivid 3D video experience of realistic scenes relies greatly on the amount and quality of the texture and depth map data. However, due to the limitations of video capturing hardware and transmission bandwidth, transmitted video has to be compressed, which in general degrades the received video quality. This makes it hard to meet users’ requirements for high definition and visual experience, and it also limits the development of future applications. Therefore, image/video super-resolution techniques have been proposed to address this issue. Image super-resolution aims to reconstruct a high-resolution image from single or multiple low-resolution images of the same scene captured under different conditions. Based on the type of image to be super-resolved, image super-resolution comprises texture and depth image super-resolution. Classified by implementation method, there are three main categories: interpolation-based, reconstruction-based and learning-based super-resolution algorithms. This thesis focuses on exploiting depth data in interpolation-based super-resolution algorithms for texture video and depth maps. Two novel texture super-resolution algorithms and one depth super-resolution algorithm are proposed as the main contributions of this thesis.

    The first texture super-resolution algorithm operates in a Mixed Resolution (MR) multiview video system, where at least one of the views is captured at Low Resolution (LR) while the others are captured at Full Resolution (FR). In order to reduce visual discomfort and adapt the MR video format for free-viewpoint television, the low-resolution views are super-resolved to the target full resolution by the proposed virtual-view-assisted super-resolution algorithm. Inter-view similarity is used to decide whether each missing pixel in the super-resolved frame is filled by a virtual-view pixel or by a spatially interpolated pixel, with the decision mechanism steered by the texture characteristics of the missing pixel's neighbors. Thus, the proposed method can recover details in regions with edges while maintaining good quality in smooth areas by properly exploiting the high-quality virtual-view pixels and the directional correlation of pixels.

    The second texture super-resolution algorithm is based on the Multiview Video plus Depth (MVD) system, which consists of textures and the associated per-pixel depth data. In order to further reduce the transmitted data and the quality degradation of received video, a systematic framework is proposed to downsample the original MVD data and later super-resolve the LR views. At the encoder side, the rows of two adjacent views are downsampled in an interlaced and complementary fashion, whereas, at the decoder side, the discarded pixels are recovered by fusing the virtual-view pixels with directionally interpolated pixels from the complementary downsampled views. Consequently, with the assistance of virtual views, the proposed approach can effectively achieve these two goals.

    From the previous two works, we can observe that depth data has great potential for 3D video enhancement. However, the low spatial resolution of depth images generated by Time-of-Flight (ToF) depth cameras has limited their applications. Hence, in the last contribution of this thesis, a planar-surface-based depth map super-resolution approach is presented, which interpolates depth images by exploiting the equation of each detected planar surface. Both quantitative and qualitative experimental results demonstrate the effectiveness and robustness of the proposed approach over benchmark methods.
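
    The edge-steered fill-in decision described for the first algorithm can be sketched as follows (the Sobel edge measure, the median-filter spatial fallback and the threshold are illustrative assumptions, not the thesis's exact decision mechanism):

        # Hypothetical sketch: fill missing pixels from the warped virtual view
        # in edgy regions, fall back to spatial interpolation in smooth ones.
        import numpy as np
        from scipy import ndimage

        def fill_missing(upsampled, virtual_view, missing_mask, edge_thresh=30.0):
            spatial = ndimage.median_filter(upsampled, size=3)  # spatial fill-in
            edges = np.hypot(ndimage.sobel(spatial, 0), ndimage.sobel(spatial, 1))
            out = upsampled.copy()
            use_virtual = missing_mask & (edges > edge_thresh)  # detail regions
            out[use_virtual] = virtual_view[use_virtual]
            out[missing_mask & ~use_virtual] = spatial[missing_mask & ~use_virtual]
            return out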
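
    Similarly, the planar-surface-based depth super-resolution reduces, for each detected planar segment, to fitting z = ax + by + c by least squares and evaluating the plane on the high-resolution grid (the plane segmentation is assumed to be given; this is a sketch, not the thesis implementation):

        # Hypothetical sketch: upsampled depths lie exactly on each fitted plane.
        import numpy as np

        def upsample_planar_depth(depth_lr, labels_lr, scale):
            h, w = depth_lr.shape
            depth_hr = np.zeros((h * scale, w * scale))
            yy, xx = np.mgrid[0:h * scale, 0:w * scale] / scale  # HR coords, LR units
            # Nearest LR pixel for every HR pixel, used to propagate the labels.
            yi = np.clip(yy.astype(int), 0, h - 1)
            xi = np.clip(xx.astype(int), 0, w - 1)
            for seg in np.unique(labels_lr):
                ys, xs = np.nonzero(labels_lr == seg)
                A = np.column_stack([xs, ys, np.ones_like(xs)])
                (a, b, c), *_ = np.linalg.lstsq(A, depth_lr[ys, xs], rcond=None)
                mask = labels_lr[yi, xi] == seg
                depth_hr[mask] = a * xx[mask] + b * yy[mask] + c
            return depth_hr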