Panoramic Video Stitching
Digital camera and smartphone technologies have made high-quality images and video pervasive and abundant. Combining, or stitching, collections of images taken from a variety of viewpoints into an extended panoramic image is a common and popular function on such devices. Extending this functionality to video, however, poses many new challenges due to the demand for both spatial and temporal continuity. Multi-view video stitching (also called panoramic video stitching) is an emerging research area in computer vision, image/video processing, and computer graphics, with wide applications in virtual reality, virtual tourism, surveillance, and human-computer interaction. In this thesis, I explore the technical and practical problems in the complete process of stitching a high-resolution multi-view video into a high-resolution panoramic video. The challenges addressed include video stabilization, efficient multi-view video alignment and panoramic video stitching, color correction, and blurred-frame detection and repair.
Specifically, I propose a continuity-aware Kalman filtering scheme for rotation angles for video stabilization and jitter removal. For efficient stitching of long, high-resolution panoramic videos, I propose constrained and multigrid SIFT matching schemes, concatenated image projection and warping, and min-space feathering. Together, these three approaches greatly reduce the computational time and memory requirements of panoramic video stitching, making it feasible to stitch high-resolution (e.g., 1920x1080 pixels), long panoramic video sequences on standard workstations.
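As a hedged illustration of the stabilization idea, the sketch below smooths a sequence of per-frame rotation angles with a scalar Kalman filter. The noise parameters `q` and `r` are illustrative assumptions, not the thesis's tuned values, and the continuity-aware extension is not reproduced here.

```python
def kalman_smooth(angles, q=1e-4, r=1e-2):
    """Smooth noisy per-frame rotation angles with a scalar Kalman filter.

    q and r are the (assumed) process and measurement noise variances.
    """
    x, p = angles[0], 1.0            # initial state estimate and variance
    smoothed = [x]
    for z in angles[1:]:
        p += q                       # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update toward the new measurement
        p *= (1.0 - k)
        smoothed.append(x)
    return smoothed
```

Each smoothed angle is a convex combination of the previous estimate and the new measurement, so jitter is attenuated while the overall camera motion is preserved.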
Color correction is the emphasis of my research. On this topic I first performed a systematic survey and performance evaluation of nine state-of-the-art color correction approaches in the context of two-view image stitching. This evaluation not only gives useful insights into and conclusions about the relative performance of these approaches, but also points out the remaining challenges and possible directions for future color correction research. Based on its conclusions, I propose a hybrid and scalable color correction approach for general n-view image stitching, and design a two-view video color correction approach for panoramic video stitching.
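For context, one of the simplest global approaches in the color-correction literature is per-channel mean/standard-deviation matching between two views. The sketch below shows this baseline only; it is not the hybrid method proposed in the thesis.

```python
import numpy as np

def match_overlap_stats(src, ref):
    """Per-channel mean/std matching of src toward ref (a global baseline).

    src, ref: H x W x C images (e.g., the overlap regions of two views).
    """
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # shift/scale src statistics to match the reference view
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return np.clip(out, 0, 255)
```

In practice the statistics would be computed on the overlap region only and the resulting transform applied to the whole source view.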
For blurred-frame detection and repair, I have completed preliminary work on partial image blur detection and classification, in which I propose an SVM-based blur-block classifier using improved and new local blur features. Based on the partial blur classification results, I design a statistical thresholding scheme for identifying blurred frames, and repair the detected frames using polynomial data fitting from neighboring unblurred frames.
Many of the techniques and ideas in this thesis are novel, general solutions to technical and practical problems in panoramic video stitching. At the end of the thesis, I summarize its contributions to the research and popularization of panoramic video stitching, and describe the remaining open research issues.
melNET: A Deep Learning Based Model For Melanoma Detection
Melanoma is the deadliest form of skin cancer; however, early-stage detection can substantially improve treatment outcomes. In this research, a deep learning-based model named "melNET" has been developed to detect melanoma in both dermoscopic and digital images. melNET uses the Inception-v3 architecture for the deep learning component; the architectural design of Inception-v3 follows the Hebbian principle and the intuition of multi-scale processing, and the network is trained in parallel across multiple GPUs with RMSprop as the optimizer. During the training phase, melNET fine-tunes the pre-trained Inception-v3 network by back-propagating the error at each iteration to update the network weights. After training, melNET predicts the diagnosis of a mole from a lesion image given as input. On a dermoscopic dataset of 200 images provided by PH2, melNET outperforms a YOLO-v2-based approach, improving sensitivity from 86.35% to 97.50%; specificity and accuracy also improve, from 85.90% to 87.50% and from 86.00% to 89.50%, respectively. melNET has also been evaluated on a digital dataset of 170 images provided by UMCG, achieving an accuracy of 84.71%, which outperforms the 81.00% accuracy of the MED-NODE model. In both cases, melNET was treated as a binary classifier and evaluated with five-fold cross-validation. In addition, melNET performs detections in real time by leveraging the end-to-end Inception-v3 architecture.
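The reported figures follow the standard binary-classification definitions of sensitivity, specificity, and accuracy, sketched below. The example counts in the usage note assume, hypothetically, a 40-melanoma / 160-benign split of the 200 PH2 images; that split is not stated in the abstract.

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: melanoma cases detected / missed; tn/fp: benign cases
    correctly / incorrectly flagged.
    """
    sensitivity = tp / (tp + fn)                    # true positive rate
    specificity = tn / (tn + fp)                    # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)      # overall correct fraction
    return sensitivity, specificity, accuracy
```

Under the assumed split, `binary_metrics(39, 1, 140, 20)` yields (0.975, 0.875, 0.895), i.e., the reported 97.50%, 87.50%, and 89.50%.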
No-reference depth map quality evaluation model based on depth map edge confidence measurement in immersive video applications
When it comes to evaluating the perceptual quality of digital media for overall quality-of-experience assessment in immersive video applications, two main approaches stand out: subjective and objective quality evaluation. On one hand, subjective quality evaluation offers the best representation of perceived video quality as assessed by real viewers. On the other hand, it consumes a significant amount of time and effort, due to the involvement of real users in lengthy and laborious assessment procedures. It is therefore essential to develop an objective quality evaluation model. The speed advantage offered by an objective model that can predict the quality of rendered virtual views from the depth maps used in the rendering process allows for faster quality assessments in immersive video applications. This is particularly important given the lack of a suitable reference or ground truth for comparing the available depth maps, especially when these applications offer live content services. This paper presents a no-reference depth map quality evaluation model based on a proposed depth map edge confidence measurement technique to assist in accurately estimating the quality of rendered (virtual) views in immersive multi-view video content. The model is applied to depth image-based rendering in the multi-view video format, providing evaluation results comparable to those in the literature, and often exceeding their performance.
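A generic building block for edge-based depth analysis is a gradient-magnitude edge map of the depth image, for example via Sobel kernels. The plain-NumPy sketch below shows only this building block; it does not reproduce the paper's edge confidence measure.

```python
import numpy as np

def sobel_edges(depth):
    """Gradient-magnitude edge map of a 2-D depth image via Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(depth.astype(float), 1, mode="edge")  # replicate borders
    h, w = depth.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()                # horizontal gradient
            gy[i, j] = (win * ky).sum()                # vertical gradient
    return np.hypot(gx, gy)
```

Strong responses mark depth discontinuities (object boundaries), the regions where rendering artifacts in virtual views typically originate.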
On the popularization of digital close-range photogrammetry: a handbook for new users.
National Technical University of Athens -- Master's thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) "Geoinformatics".
Learning from small and imbalanced dataset of images using generative adversarial neural networks.
The performance of deep learning models is unmatched by any other approach in supervised computer vision tasks such as image classification. However, training these models requires a lot of labeled data, which is not always available, and labeling a massive dataset is largely a manual and very demanding process. This problem has led to the development of techniques that bypass the need for labeling at scale. Despite this, existing techniques such as transfer learning, data augmentation, and semi-supervised learning have not lived up to expectations. Some of these techniques do not account for other classification challenges, such as the class-imbalance problem, and thus mostly underperform compared with fully supervised approaches. In this thesis, we propose new methods to train a deep image classification model with a limited number of labeled examples. This is achieved by extending state-of-the-art generative adversarial networks with multiple fake classes and network switchers. These new features enable us to train a classifier on large unlabeled data while generating class-specific samples. The proposed model is label-agnostic and suitable for different classification scenarios, ranging from weakly supervised to fully supervised settings; it is used to address classification with limited labeled data and a class-imbalance problem. Extensive experiments were carried out on different benchmark datasets. Firstly, the proposed approach was used to train a classification model, and our findings indicate that it achieves better classification accuracies, especially when the number of labeled samples is small. Secondly, the proposed approach was able to generate high-quality samples from class-imbalanced datasets; the samples' quality is evident in the improved classification performance obtained when generated samples were used to neutralize class imbalance.
The results are thoroughly analyzed and, overall, our method shows superior performance over a popular resampling technique and the AC-GAN model. Finally, we successfully applied the proposed approach as a new augmentation technique to two challenging real-world problems: faces with attributes and legacy engineering drawings. The results demonstrate that the proposed approach is effective even in extreme cases.
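The resampling baseline mentioned above can be illustrated by simple random oversampling, which duplicates minority-class samples until all classes have equal counts. This is a sketch of that baseline, not the proposed GAN-based augmentation.

```python
import random

def oversample(samples, labels, seed=0):
    """Random oversampling: grow every class to the majority-class size."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        # keep originals, then draw random duplicates up to the target size
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(picks)
        out_y.extend([y] * target)
    return out_s, out_y
```

Because duplicates carry no new information, generating class-specific synthetic samples (as the proposed model does) can outperform this baseline.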