926 research outputs found
Employing Feedback to Filter Caustic Waves in Underwater Scenes in Motion
A real-time approach for removing sunlight flicker from underwater scenes captured on video is presented. To this end, a de-flickering filter is designed. The starting point is a moving landscape scene. Essentially, the filtering approach is based on morphological characteristics of the caustic waves: it constructs an a-priori de-flickered image which is afterwards enhanced. The algorithm employs feedback of optical-flow fields and brightness in order to predict a one-step-ahead value of the brightness.
Sociedad Argentina de Informática e Investigación Operativa (SADIO)
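The abstract describes the predictor only at a high level. As a minimal sketch, a first-order recursive filter can play the role of the one-step-ahead brightness predictor; the optical-flow feedback term is omitted here, and `alpha` is an assumed smoothing gain, not a value from the paper:

```python
import numpy as np

def predict_brightness(frames, alpha=0.7):
    """One-step-ahead brightness prediction via recursive feedback.

    `frames` is a sequence of grayscale images (2-D arrays); `alpha` is a
    hypothetical feedback gain. Each step blends the running prediction
    with the newest observed frame, damping frame-to-frame flicker.
    """
    pred = frames[0].astype(np.float64)
    for f in frames[1:]:
        # feedback update: keep most of the old prediction, admit some of
        # the new brightness observation
        pred = alpha * pred + (1.0 - alpha) * f.astype(np.float64)
    return pred
```

On a sequence whose brightness alternates between two levels, the prediction settles between them rather than tracking the flicker.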
Blind Video Deflickering by Neural Filtering with a Flawed Atlas
Many videos contain flickering artifacts. Common causes of flicker include
video processing algorithms, video generation algorithms, and capturing videos
under specific situations. Prior work usually requires specific guidance such
as the flickering frequency, manual annotations, or extra consistent videos to
remove the flicker. In this work, we propose a general flicker removal
framework that only receives a single flickering video as input without
additional guidance. Since it is blind to a specific flickering type or
guidance, we name this "blind deflickering." The core of our approach is
utilizing the neural atlas in cooperation with a neural filtering strategy. The
neural atlas is a unified representation for all frames in a video that
provides temporal consistency guidance but is flawed in many cases. To this
end, a neural network is trained to mimic a filter to learn the consistent
features (e.g., color, brightness) and avoid introducing the artifacts in the
atlas. To validate our method, we construct a dataset that contains diverse
real-world flickering videos. Extensive experiments show that our method
achieves satisfying deflickering performance and even outperforms baselines
that use extra guidance on a public benchmark.Comment: To appear in CVPR2023. Code:
github.com/ChenyangLEI/All-In-One-Deflicker Website:
chenyanglei.github.io/deflicke
Engineering data compendium. Human perception and performance. User's guide
The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.
Dynamic texture analysis in video with application to flame, smoke and volatile organic compound vapor detection
Ankara : The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2009.
Thesis (Master's) -- Bilkent University, 2009.
Includes bibliographical references (leaves 74-82).
Dynamic textures are moving image sequences that exhibit stationary characteristics
in time such as fire, smoke, volatile organic compound (VOC) plumes,
waves, etc. Most surveillance applications already have motion detection and
recognition capability, but dynamic texture detection algorithms are not an
integral part of these applications. In this thesis, image-processing-based
algorithms for detecting specific dynamic textures are developed. Our methods
can be deployed in practical surveillance applications to detect VOC leaks, fire and
smoke. The method developed for VOC emission detection in infrared videos
uses a change detection algorithm to find the rising VOC plume. The rising
characteristic of the plume is detected using a hidden Markov model (HMM).
The dark regions that are formed on the leaking equipment are found using a
background subtraction algorithm. Another method is developed based on an
active learning algorithm that is used to detect wild fires at night and close range
flames. The active learning algorithm is based on the Least-Mean-Square (LMS)
method. Decisions from the sub-algorithms, each of which characterize a certain
property of the texture to be detected, are combined using the LMS algorithm to reach a final decision. Another image processing method is developed to detect
fire and smoke from moving camera video sequences. The global motion
of the camera is compensated by finding an affine transformation between the
frames using optical flow and RANSAC. Three frame change detection methods
with motion compensation are used for fire detection with a moving camera. A
background subtraction algorithm with global motion estimation is developed
for smoke detection.
Günay, Osman
M.S.
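The LMS-based decision fusion described above can be sketched as an online weight update over the sub-algorithm scores. The function names, the step size `mu`, and the score range are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def lms_fuse(decisions, labels, mu=0.05):
    """Fuse sub-algorithm decisions with a Least-Mean-Square update.

    `decisions` is an (n_samples, n_algos) array of per-algorithm scores
    in [-1, 1]; `labels` holds the supervised target decisions (+/-1).
    Weights start uniform and adapt toward the sub-algorithms whose
    scores best track the target.
    """
    n = decisions.shape[1]
    w = np.full(n, 1.0 / n)            # uniform initial weights
    for x, d in zip(decisions, labels):
        y = w @ x                       # fused (weighted) decision
        w += mu * (d - y) * x           # LMS weight update on the error
    return w

def fused_decision(w, x, thresh=0.0):
    """Final detection decision from the learned fusion weights."""
    return 1 if w @ x > thresh else -1
```

With one sub-algorithm that agrees with the labels and one that opposes them, the adapted weights favor the reliable sub-algorithm.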
A review of digital video tampering: from simple editing to full synthesis.
Video tampering methods have witnessed considerable progress in recent years. This is partly due to the rapid development of advanced deep learning methods, and also due to the large volume of video footage that is now in the public domain. Historically, convincing video tampering has been too labour-intensive to achieve on a large scale. However, recent developments in deep-learning-based methods have made it possible not only to produce convincing forged video but also to fully synthesize video content. Such advancements provide new means to improve visual content itself, but at the same time, they raise new challenges for state-of-the-art tampering detection methods. Video tampering detection has been an active field of research for some time, with periodic reviews of the subject. However, little attention has been paid to video tampering techniques themselves. This paper provides an objective and in-depth examination of current techniques related to digital video manipulation. We thoroughly examine their development, and show how current evaluation techniques provide opportunities for the advancement of video tampering detection. A critical and extensive review of photo-realistic video synthesis is provided, with emphasis on deep-learning-based methods. Existing tampered-video datasets are also qualitatively reviewed and critically discussed. Finally, conclusions are drawn upon an exhaustive and thorough review of tampering methods, with discussions of future research directions aimed at improving detection methods.