Drone Shadow Tracking
Aerial videos taken by a drone flying not far above the surface may contain the drone's shadow projected onto the scene, which deteriorates the aesthetic quality of the videos. In the presence of other shadows, shadow removal cannot be applied directly, and the drone's shadow must first be tracked. Tracking a drone's shadow in a video is, however, challenging: the shadow's varying size, shape, and orientation, together with changes in drone altitude, pose difficulties, and the shadow can easily disappear over dark areas. A shadow nevertheless has specific physical properties, beyond its geometric shape, that can be leveraged. In this paper, we incorporate knowledge of the shadow's physical properties, in the form of shadow detection masks, into a correlation-based tracking algorithm. We capture a test set of aerial videos taken under different settings and compare our results to those of a state-of-the-art tracking algorithm.

Comment: 5 pages, 4 figures
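The idea of gating a correlation tracker with a shadow mask can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the intensity-threshold shadow detector, the search radius, and the unnormalized correlation score are all assumptions made for the sketch.

```python
import numpy as np

def shadow_mask(frame, thresh=0.35):
    # Hypothetical shadow detector: treat dark pixels as shadow candidates.
    return (frame < thresh).astype(float)

def track_step(frame, template, prev_pos, search=5):
    """One tracking step: correlate the template over a local search window,
    weighting each response by how 'shadow-like' the candidate patch is."""
    th, tw = template.shape
    mask = shadow_mask(frame)
    best, best_pos = -np.inf, prev_pos
    r0, c0 = prev_pos
    for r in range(max(0, r0 - search), min(frame.shape[0] - th, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(frame.shape[1] - tw, c0 + search) + 1):
            patch = frame[r:r+th, c:c+tw]
            # Centered correlation between patch and template.
            corr = np.sum((patch - patch.mean()) * (template - template.mean()))
            # Suppress responses in regions that contain no shadow pixels.
            score = corr * mask[r:r+th, c:c+tw].mean()
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

On a synthetic bright frame with a single dark square, the step relocates the template even when the plain correlation response over bright areas is flat, because the mask zeroes out non-shadow candidates.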
Analysis and enhancement of the denoising of Kinect depth data through an iterative technique
Since the release of the Kinect by Microsoft, the accuracy and stability of Kinect data, such as the depth map, have been essential elements of research and data analysis. To develop efficient means of analyzing and using Kinect data, researchers require high-quality depth data during the preprocessing step, which is crucial for accurate results. One of the most important concerns is to eliminate image noise and bring images and video to the best possible quality. In this paper, the different types of noise affecting the Kinect are analyzed, and a technique is applied that reduces background noise based on the distance between the Kinect device and the user. For shadow removal, an iterative method is used to eliminate the shadows cast in the Kinect data. The result is a 3D depth image of good quality and accuracy. The results of this study further show that the image background is eliminated completely and that the quality of the 3D depth map is enhanced.
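The two steps described in the abstract — distance-based background removal followed by iterative filling of shadow (zero-depth) pixels — can be sketched as below. The distance threshold and the 4-neighbour averaging scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def denoise_depth(depth, max_dist=2000.0, iters=10):
    """Sketch: drop background beyond a distance threshold (in mm), then
    iteratively fill zero-valued 'shadow' holes from valid 4-neighbours."""
    d = depth.astype(float).copy()
    d[d > max_dist] = 0.0                      # background removal by distance
    for _ in range(iters):
        holes = d == 0.0
        if not holes.any():
            break
        p = np.pad(d, 1)                       # depth with a zero border
        v = np.pad((d > 0).astype(float), 1)   # validity of each neighbour
        # Sum of the 4 neighbours and the count of valid ones, per pixel.
        neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        valid = v[:-2, 1:-1] + v[2:, 1:-1] + v[1:-1, :-2] + v[1:-1, 2:]
        fill = holes & (valid > 0)
        d[fill] = neigh[fill] / valid[fill]    # average of valid neighbours
    return d
```

Each iteration grows valid depth inward from the hole boundary, so small Kinect shadow regions are filled after a few passes.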
High-Resolution Document Shadow Removal via A Large-Scale Real-World Dataset and A Frequency-Aware Shadow Erasing Net
Shadows often occur when we capture documents with casual equipment, which degrades the visual quality and readability of the digital copies. Unlike algorithms for natural shadow removal, algorithms for document shadow removal must preserve the details of fonts and figures in high-resolution input. Previous works ignore this problem and remove shadows via approximate attention and small datasets, which may not work in real-world situations. We handle high-resolution document shadow removal directly via a large-scale real-world dataset and a carefully designed frequency-aware network. For the dataset, we acquire over 7k pairs of high-resolution (2462 x 3699) real-world document images, with varied samples under different lighting conditions; this is 10 times larger than existing datasets. For the network design, we decouple the high-resolution images in the frequency domain, where the low-frequency details and high-frequency boundaries can be effectively learned via the carefully designed network structure. Powered by our network and dataset, the proposed method clearly outperforms previous methods in terms of visual quality and numerical results. The code, models, and dataset are available at: https://github.com/CXH-Research/DocShadow-SD7K

Comment: Accepted by the International Conference on Computer Vision 2023 (ICCV 2023)
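The frequency-domain decoupling that the abstract relies on can be illustrated with a simple low/high split. The box blur below is a stand-in for the paper's learned decomposition; kernel size and the residual-based split are assumptions for the sketch.

```python
import numpy as np

def split_frequencies(img, k=5):
    """Decouple an image into low-frequency content and high-frequency detail.
    A k x k box blur gives the low band; the residual carries boundaries."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    low = np.zeros(img.shape, dtype=float)
    for dy in range(k):                 # accumulate the k*k shifted copies
        for dx in range(k):
            low += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low                    # fine detail / shadow boundaries
    return low, high
```

Because the split is exact (`low + high == img`), the two bands can be processed by separate branches and recombined without losing information — the property a frequency-aware network exploits.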
Grassmann Averages for Scalable Robust PCA
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase – “big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small- to medium-sized datasets. To address this, we introduce the Grassmann Average (GA), which expresses dimensionality reduction as an average of the subspaces spanned by the data. Because averages can be computed efficiently, we immediately gain scalability. GA is inherently more robust than PCA, but we show that the two coincide for Gaussian data. We exploit the fact that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect to vectors (subspaces) or to elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal, and we show scalability by performing robust PCA on the entire Star Wars IV movie.
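The trimmed-average idea behind TGA can be sketched for the leading component as follows. This is a simplified illustration, not the authors' exact formulation (which also weights observations by their norms): sign-align each observation with the current estimate, then replace the ordinary mean with a coordinate-wise trimmed mean.

```python
import numpy as np

def trimmed_grassmann_average(X, trim=0.1, iters=50, seed=0):
    """Leading robust component of the rows of X (n x d) via a trimmed
    Grassmann-average-style iteration (sketch; trim is the fraction cut
    from each tail of every coordinate)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)
    k = int(trim * n)                       # samples trimmed per tail
    for _ in range(iters):
        signs = np.sign(X @ q)              # align each row with q ...
        signs[signs == 0] = 1.0
        aligned = signs[:, None] * X        # ... so averaging is meaningful
        s = np.sort(aligned, axis=0)
        mu = s[k:n - k].mean(axis=0)        # coordinate-wise trimmed mean
        nq = mu / np.linalg.norm(mu)
        converged = abs(nq @ q) > 1 - 1e-10
        q = nq
        if converged:
            break
    return q
```

Because the trimmed mean discards the extreme values of each pixel/coordinate, a few grossly corrupted observations cannot drag the estimated component away, which is exactly the element-wise robustness the abstract describes.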
A new protocol for the texture mapping process and 2D representation of rupestrian architecture
The development of survey techniques for architecture and archaeology requires a general review of the methods used for the representation of numerical data. The possibilities offered by data processing open new paths for studying issues connected to the discipline of drawing. The research project experimented with different approaches to the representation of rupestrian architecture and to the texture mapping process. The nature of rupestrian architecture does not allow a traditional representation of sections and projections of edges and outlines. The paper presents a method, Equidistant Multiple Sections (EMS), inspired by cartography and based on the use of isohipses (contour lines) generated from different geometric planes. A specific paragraph is dedicated to the texture mapping process for unstructured surface models. One of the main difficulties in image projection is the recognition of homologous points between the image and the point cloud, above all in the areas with the largest deformations. With the aid of a “virtual scan” tool, a different procedure was developed to improve the image correspondences. The results show a considerable improvement of the entire process, especially for the architectural vaults. A detailed study concerned the unfolding of ruled (straight-line) surfaces; the barrel vault of the analyzed chapel was unfolded to observe the paintings in their real shapes, outside of their morphological context.
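The equidistant-sections idea can be sketched on a point cloud by grouping points into horizontal bands at regular heights. The spacing, tolerance, and the assumption of horizontal cutting planes are all illustrative choices, not the EMS protocol itself.

```python
import numpy as np

def equidistant_sections(points, spacing=0.5, tol=0.05):
    """Sketch of EMS-style sectioning: bucket points of an (n x 3) cloud
    into isohipse-like bands around equidistant z levels."""
    z = points[:, 2]
    levels = np.arange(z.min(), z.max() + spacing, spacing)
    # Keep, for each level, the points lying within tol of that plane.
    return {float(l): points[np.abs(z - l) < tol] for l in levels}
```

Each band approximates a contour section of the irregular rock surface, which can then be drawn in 2D the way cartographic isohipses are.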