58 research outputs found

    Fast Full-frame Video Stabilization with Iterative Optimization

    Video stabilization refers to the problem of transforming a shaky video into a visually pleasing one. How to strike a good trade-off between visual quality and computational speed remains one of the open challenges in video stabilization. Inspired by the analogy between wobbly frames and jigsaw puzzles, we propose an iterative optimization-based learning approach using synthetic datasets for video stabilization, which consists of two interacting submodules: motion trajectory smoothing and full-frame outpainting. First, we develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field. The confidence map associated with the estimated optical flow is exploited to guide the search for shared regions through backpropagation. Second, we take a divide-and-conquer approach and propose a novel multi-frame fusion strategy to render full-frame stabilized views. An important new insight brought about by our iterative optimization approach is that the target video can be interpreted as the fixed point of a nonlinear mapping for video stabilization. We formulate video stabilization as a problem of minimizing the amount of jerkiness in motion trajectories, which guarantees convergence with the help of fixed-point theory. Extensive experimental results are reported to demonstrate the superiority of the proposed approach in terms of computational speed and visual quality. The code will be available on GitHub. (Comment: Accepted by ICCV 2023.)
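The fixed-point view in the abstract can be illustrated with a toy sketch: treat a 1-D camera trajectory as the variable and repeatedly apply a smoothing map F until the iterates stop changing, i.e. until a fixed point of F is (approximately) reached. The neighbour-averaging map below is a simple stand-in for the paper's learned smoothing submodule; all names (`smooth_trajectory`, `stabilize_fixed_point`, the `weight` parameter) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def smooth_trajectory(traj, weight=0.5):
    """One application of the smoothing map F: blend each point with
    the mean of its two neighbours (edge values are replicated)."""
    padded = np.pad(traj, 1, mode="edge")
    neighbours = 0.5 * (padded[:-2] + padded[2:])
    return (1 - weight) * traj + weight * neighbours

def stabilize_fixed_point(traj, tol=1e-6, max_iter=1000):
    """Iterate x <- F(x) until the update falls below tol, i.e. until
    we are close to the fixed point of the smoothing map."""
    x = traj.astype(float)
    for _ in range(max_iter):
        x_next = smooth_trajectory(x)
        if np.max(np.abs(x_next - x)) < tol:
            break
        x = x_next
    return x

# Hypothetical shaky 1-D camera path: a random walk.
shaky = np.cumsum(np.random.default_rng(0).normal(size=100))
smoothed = stabilize_fixed_point(shaky)
```

The iteration reduces the total jerkiness (sum of absolute second differences) of the path, which is the quantity the paper's formulation minimizes.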

    Digital video stabilization: algorithms and evaluation (Estabilização digital de vídeos: algoritmos e avaliação)

    Advisor: Hélio Pedrini. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.

    Abstract: The development of multimedia equipment has allowed significant growth in the production of videos through professional and amateur cameras, smartphones, and other mobile devices. However, videos captured by these devices are subject to unwanted vibrations due to camera shaking. To overcome this problem, digital stabilization aims to remove undesired motion from videos through software techniques, without the use of specific hardware, either to enhance human perception of the scenes or to improve final applications such as object detection and tracking. The two-dimensional digital video stabilization process is usually divided into three main steps: camera motion estimation, removal of unwanted motion, and generation of the corrected video. In this work, we investigate and evaluate digital video stabilization methods for correcting disturbances and instabilities that occur during video acquisition. In the motion estimation step, we develop and analyze a consensual method that combines a set of local feature techniques for global motion estimation. We also introduce and test a novel approach that identifies failures in the global motion estimation of the camera through optimization and computes a corrected estimate. In the removal of unwanted motion step, we propose and evaluate a novel approach to video stabilization based on an adaptive Gaussian filter that smooths the camera path.
    Because the assessment measures available in the literature are inconsistent with human perception, two novel representations are proposed for the qualitative evaluation of video stabilization methods: the first is based on visual rhythms and represents the behavior of the video motion, whereas the second is based on the motion energy image and represents the amount of motion present in the video. Experiments were conducted on three video databases. The first consists of eleven videos available from the GaTech VideoStab database plus three videos collected separately. The second, proposed by Liu et al., consists of 139 videos divided into different categories.
    Finally, we propose a database that complements the others, composed from four separately collected videos: excerpts of the originals containing moving foreground objects and little representative background were extracted, resulting in eight final videos. Experimental results demonstrate the effectiveness of the visual representations as a qualitative measure of video stability, as well as the advantage of the combination method over individual local feature approaches. The proposed optimization-based method was able to detect and correct motion estimation failures, achieving considerably better results than when this correction is not applied. The adaptive Gaussian filter generated videos with an adequate trade-off between stabilization rate and the number of pixels preserved in the video frames. For the videos of the proposed database, the results of our optimization method were superior to those obtained with YouTube's state-of-the-art stabilization method.
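The adaptive Gaussian path smoothing described above can be sketched in a few lines: convolve the camera path with a Gaussian whose width varies per frame. The adaptation rule used here (the standard deviation grows with local jerkiness) and all names (`adaptive_gaussian_smooth`, `base_sigma`, `gain`) are illustrative assumptions, not the thesis's exact formula.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def adaptive_gaussian_smooth(path, base_sigma=2.0, gain=1.0, radius=15):
    """Smooth a 1-D camera path with a per-frame Gaussian whose sigma
    grows where the local second derivative (jerkiness) is large, so
    shakier segments are smoothed more aggressively."""
    jerk = np.abs(np.gradient(np.gradient(path)))
    sigmas = base_sigma + gain * jerk
    padded = np.pad(path, radius, mode="edge")
    out = np.empty_like(path, dtype=float)
    for i, s in enumerate(sigmas):
        out[i] = padded[i:i + 2 * radius + 1] @ gaussian_kernel(s, radius)
    return out

# Hypothetical shaky camera path: a random walk.
shaky_path = np.cumsum(np.random.default_rng(1).normal(size=200))
smooth_path = adaptive_gaussian_smooth(shaky_path)
```

A fixed-sigma Gaussian is the degenerate case `gain=0`; the per-frame sigma is what lets the filter trade stabilization rate against the cropped frame area mentioned in the abstract.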

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
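The de-facto standard formulation the survey refers to casts SLAM as maximum-a-posteriori inference over a factor graph, which after linearization becomes a sparse least-squares problem. The toy 1-D pose graph below (three poses, a prior, two odometry factors, and one loop closure, all measurement values hypothetical) shows that structure in its smallest form.

```python
import numpy as np

# Variables: poses x0, x1, x2 on a line. Each row of A is one factor;
# b holds its measurement. Unit information (weight) on every factor.
#   prior:        x0           = 0.0
#   odometry:     x1 - x0      = 1.0
#   odometry:     x2 - x1      = 1.0
#   loop closure: x2 - x0      = 2.1   (slightly inconsistent on purpose)
A = np.array([
    [ 1.0, 0.0, 0.0],   # prior on x0
    [-1.0, 1.0, 0.0],   # odometry x0 -> x1
    [ 0.0,-1.0, 1.0],   # odometry x1 -> x2
    [-1.0, 0.0, 1.0],   # loop closure x0 -> x2
])
b = np.array([0.0, 1.0, 1.0, 2.1])

# MAP estimate = linear least-squares solution; the loop-closure
# residual is spread evenly over the two odometry steps.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# x is approximately [0.0, 1.0333, 2.0667]
```

Real SLAM back-ends (e.g. pose-graph optimizers) solve exactly this kind of system, only nonlinear, sparse, and with millions of variables, via iterated linearization.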

    Parallel Tracking and Mapping for Manipulation Applications with Golem Krang

    We implement a simultaneous localization and mapping (SLAM) system and an image semantic segmentation method on a mobile manipulator. The SLAM application works toward navigating among obstacles in unknown environments. The object detection method will be integrated for future manipulation tasks such as grasping. This work will be demonstrated on a real robotic hardware system in the lab.

    StableFlow: a physics inspired digital video stabilization

    This thesis addresses the problem of digital video stabilization. With the widespread use of handheld devices and unmanned aerial vehicles (UAVs) that can record video, digital video stabilization becomes more important, as the recorded videos are often shaky, which undermines their visual quality. Digital video stabilization has been studied for decades, yielding an extensive body of literature in the field; however, current approaches are either computationally expensive or under-perform in terms of visual quality. In this thesis, we first introduce a novel study of the effect of image denoising on feature-based digital video stabilization. Then, we introduce StableFlow, a novel technique for real-time stabilization inspired by the mass-spring-damper model. A video frame is modelled as a mass suspended in each direction by a critically damped spring and damper, which can be fine-tuned to adapt to different shaking patterns. The proposed technique is tested on video sequences with different types of shakiness and diverse content. The obtained results significantly outperform state-of-the-art stabilization techniques in terms of visual quality while running in real time.
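The mass-spring-damper idea can be sketched directly: attach the stabilized frame position to the shaky input with a spring of natural frequency omega and set the damping ratio to 1 (critical damping, damper coefficient 2*omega), so the output follows the input without overshoot. The parameter values and signal below are illustrative, not the thesis's tuning.

```python
import math

def critically_damped_follow(samples, omega=8.0, dt=1.0 / 30.0):
    """Smooth a shaky 1-D position signal with a critically damped
    spring-damper: y'' = omega^2 (u - y) - 2 omega y'. Integrated with
    semi-implicit Euler, which stays stable for omega * dt << 1."""
    y, v = samples[0], 0.0
    out = []
    for u in samples:
        a = omega * omega * (u - y) - 2.0 * omega * v  # spring + damping
        v += a * dt
        y += v * dt
        out.append(y)
    return out

# Hypothetical shaky signal: a steady pan plus high-frequency jitter.
shaky = [i / 30.0 + 0.05 * math.sin(2.9 * i) for i in range(300)]
steady = critically_damped_follow(shaky)
```

The spring acts as a second-order low-pass filter: the jitter, far above omega, is strongly attenuated, while the underlying pan passes through with a small lag. Tuning omega per-axis is one way such a model could "adapt to different shaking patterns."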

    Research on real-time video mosaicking and stabilization using high-speed vision (高速ビジョンを用いたリアルタイムビデオモザイキングと安定化に関する研究)

    Hiroshima University (広島大学), Doctor of Engineering (doctoral dissertation).

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not to the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data, and the time spent collecting it, are reduced because there is no need to re-observe the same areas multiple times. This dissertation proposes a solution for fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments.
    An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation increases the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while the number of tracking losses throughout the image sequence was reduced. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP.
    The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from a result unattainable with ViSP to 2 cm positional accuracy in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and that a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%, and the number of incorrect matches was reduced by 80%.
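Geometric hashing over vertical lines reduces, in its simplest form, to matching sets of x-coordinates under an unknown scale and translation: every ordered pair of model lines defines a basis, the remaining lines are stored by their basis-normalized positions, and at query time one image basis votes for the model basis that explains the most lines. The sketch below is a minimal 1-D illustration of that voting scheme under hypothetical line positions, not the dissertation's algorithm or data.

```python
import itertools
from collections import Counter

def hash_key(value, q=0.05):
    """Quantize a normalized coordinate into a hash-table bin."""
    return round(value / q)

def build_table(model_xs):
    """Preprocessing: for every ordered basis pair (i, j), map x_i -> 0
    and x_j -> 1, and file the remaining lines under their normalized
    position, remembering which basis produced them."""
    table = {}
    for i, j in itertools.permutations(range(len(model_xs)), 2):
        a, b = model_xs[i], model_xs[j]
        for k, x in enumerate(model_xs):
            if k in (i, j):
                continue
            table.setdefault(hash_key((x - a) / (b - a)), []).append((i, j))
    return table

def match(table, image_xs):
    """Recognition: normalize image lines by one image basis pair and
    vote; the winning model basis gives the model-to-image registration."""
    votes = Counter()
    a, b = image_xs[0], image_xs[1]
    for x in image_xs[2:]:
        for basis in table.get(hash_key((x - a) / (b - a)), []):
            votes[basis] += 1
    return votes.most_common(1)[0] if votes else None

# Hypothetical model lines, and the same lines seen at scale 2, shift 3.
model_xs = [0.0, 1.0, 2.5, 4.0]
image_xs = [3.0, 5.0, 8.0, 11.0]
best = match(build_table(model_xs), image_xs)
# best[0] is the winning model basis (0, 1); best[1] its vote count.
```

A full system would try several image basis pairs and verify the winning hypothesis geometrically; the indexing idea, however, is exactly this table lookup, which is what makes the matching process efficient on a large model.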

    Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, demonstrating the feasibility of a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism for flying and navigating small to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.