10 research outputs found

    Assembling convolution neural networks for automatic viewing transformation

    Images taken under different camera poses are rotated or distorted, which leads to poor viewing experiences. This paper proposes a new framework that automatically transforms images to a conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image, and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
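The abstract does not spell out the projection mapping algorithm, but the underlying operation, re-rendering an image under a planar homography, can be sketched. In the sketch below, `rotation_homography` and `apply_homography` are illustrative names, and a homography for an in-plane rotation about the image center stands in for whatever transformation the networks would predict.

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # back to inhomogeneous

def rotation_homography(theta, cx, cy):
    """Homography for an in-plane rotation by theta about image point (cx, cy)."""
    T = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.linalg.inv(T) @ R @ T  # move center to origin, rotate, move back
```

Warping the full image would additionally require resampling (e.g., OpenCV's `warpPerspective`); the point mapping above is the geometric core.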

    Video Upright Adjustment and Stabilization

    Keywords: upright adjustment, video stabilization, camera path
    We propose a novel video upright adjustment method that can reliably correct the slanted video content often found in casual videos. Our approach combines deep learning and Bayesian inference to estimate accurate rotation angles from video frames. We train a convolutional neural network to obtain initial estimates of the rotation angles of input video frames. The initial estimates from the network are temporally inconsistent and inaccurate. To resolve this, we use Bayesian inference: we analyze the estimation errors of the network and derive an error model. We then use the error model to formulate video upright adjustment as a maximum a posteriori problem in which we estimate temporally consistent rotation angles from the initial estimates while respecting the relative rotations between consecutive frames. Finally, we propose a joint approach to video stabilization and upright adjustment, which minimizes the information loss caused by handling stabilization and upright adjustment separately. Experimental results show that our video upright adjustment method can effectively correct slanted video content, and that its combination with video stabilization can achieve visually pleasing results from shaky and slanted videos.
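Under Gaussian error assumptions, a maximum a posteriori formulation like the one described above (per-frame initial estimates plus a prior on relative rotations between consecutive frames) reduces to a linear least-squares problem. The sketch below is a simplified stand-in, not the paper's actual error model; the quadratic penalties and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def smooth_angles(theta_init, rel_rot, lam=10.0):
    """MAP-style smoothing of per-frame roll angles.

    theta_init : (N,) initial per-frame estimates (e.g., from a CNN)
    rel_rot    : (N-1,) relative rotations between consecutive frames
    lam        : weight of the temporal-consistency prior
    Minimizes sum_i (x_i - theta_i)^2 + lam * sum_i (x_{i+1} - x_i - rel_i)^2.
    """
    n = len(theta_init)
    rows, rhs = [], []
    # data term: stay near the initial estimate
    for i in range(n):
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(theta_init[i])
    # prior term: frame-to-frame change should match the relative rotation
    w = np.sqrt(lam)
    for i in range(n - 1):
        r = np.zeros(n); r[i] = -w; r[i + 1] = w
        rows.append(r); rhs.append(w * rel_rot[i])
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With a large `lam`, the output follows the relative rotations closely while the data term anchors the overall level; the mean of the input estimates is preserved exactly, since the difference terms cancel in the normal equations.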

    Automated Semantic Content Extraction from Images

    In this study, an automatic semantic segmentation and object recognition methodology is implemented that bridges the semantic gap between low-level features of image content and high-level conceptual meaning. Semantically understanding an image is essential in modeling autonomous robots, targeting customers in marketing, and reverse engineering of building information modeling in the construction industry. To achieve an understanding of a room from a single image, we propose a new object recognition framework with four major components: segmentation, scene detection, conceptual cueing, and object recognition. The new segmentation methodology developed in this research extends Felzenszwalb's cost function to include new surface index and depth features, as well as color, texture, and normal features, to overcome the issues of occlusion and shadowing commonly found in images. Adding depth allows capturing new features for the object recognition stage, achieving high accuracy compared to the current state of the art. The goal was to develop an approach to capture and label perceptually important regions, which often reflect a global representation and understanding of the image. We developed a system that uses contextual and common-sense information to improve object recognition and scene detection, and fused the information from scenes and objects to reduce the level of uncertainty. In addition to improving segmentation, scene detection, and object recognition, this study can be used in applications that require physically parsing an image into objects, surfaces, and their relations. The applications include robotics, social networking, intelligence and anti-terrorism efforts, criminal investigations and security, marketing, and building information modeling in the construction industry. In this dissertation, a structural framework (ontology) is developed that generates text descriptions based on an understanding of the objects, structures, and attributes of an image.
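Felzenszwalb-style graph segmentation ranks pixel-adjacency edges by a dissimilarity weight and merges regions greedily. A minimal sketch of the kind of extended edge weight described above, combining color with depth and surface-normal terms, might look like the following; the linear combination and its coefficients are assumptions for illustration, not the dissertation's actual cost function.

```python
import numpy as np

def edge_weight(p, q, w_color=1.0, w_depth=1.0, w_normal=1.0):
    """Dissimilarity between two adjacent pixels.

    Each pixel is a dict with 'rgb' (3,), 'depth' (scalar), and 'normal'
    (3, assumed unit length). Larger values mean the pixels are less likely
    to belong to the same region.
    """
    d_color = np.linalg.norm(np.asarray(p['rgb'], float) - np.asarray(q['rgb'], float))
    d_depth = abs(p['depth'] - q['depth'])
    d_normal = 1.0 - float(np.dot(p['normal'], q['normal']))  # 0 when normals agree
    return w_color * d_color + w_depth * d_depth + w_normal * d_normal
```

In the full algorithm these weights would be sorted and fed to a union-find merge with Felzenszwalb's adaptive threshold; only the feature-fusion step is sketched here.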

    Exploiting Motion-based Visual Cues for Interpreting the Content of Video Sequences

    The interpretation of video sequence content is one of the main research areas in computer vision. To enrich the information provided by visual cues specific to a single image, one can use cues derived from the motion between images. This motion may be caused by a change in the orientation or position of the acquisition system, by the displacement of objects in the scene, and by many other factors. I focused on two phenomena arising from motion in video sequences. First, the motion caused by the camera, and how it can be interpreted through a combination of the apparent motion between images and the displacement of vanishing points in those images. Second, the detection and classification of occlusion, a phenomenon caused by motion in a complex scene, using a geometric model in the spatio-temporal volume. These two works are presented through two articles submitted for publication in scientific journals.

    Two Case Studies on Vision-based Moving Objects Measurement

    In this thesis, we present two case studies on vision-based moving object measurement. In the first case, we used a monocular camera to perform ego-motion estimation for a robot in an urban area. We developed the algorithm based on vertical line features, such as the vertical edges of buildings and poles, because vertical lines are easy to extract, insensitive to lighting conditions and shadows, and sensitive to camera/robot movements on the ground plane. We derived an incremental estimation algorithm based on vertical line pairs. We analyzed how errors are introduced and propagated in the continuous estimation process by deriving a closed-form representation of the covariance matrix. We then formulated the minimum-variance ego-motion estimation problem as a convex optimization problem and solved it with the interior-point method. The algorithm was extensively tested in physical experiments and compared with two popular methods; our estimation results consistently outperformed the two counterparts in robustness, speed, and accuracy. In the second case, we used a camera-mirror system to measure the swimming motion of a live fish, and the extracted motion data was used to drive an animation of fish behavior. The camera-mirror system captures three orthogonal views of the fish. We also built a virtual fish model to assist the measurement of the real fish. The fish model has a four-link spinal cord and meshes attached to the spinal cord. We projected the fish model into three orthogonal views and matched the projected views with the real views captured by the camera. We then maximized the overlapping area of the fish between the projected and real views; the maximization result gives the position, orientation, and body bending angle of the fish model, which are used for the fish movement measurement. Part of this algorithm is still under construction and will be updated in the future.
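The overlap maximization in the second case study scores a candidate pose of the virtual fish by how well its projected silhouette covers the observed one. The score itself can be sketched as an intersection-over-union of binary masks; the thesis's exact overlap measure may differ, so treat this as an illustrative objective.

```python
import numpy as np

def silhouette_iou(mask_a, mask_b):
    """Intersection-over-union of two boolean silhouette masks.

    Returns 1.0 for identical non-empty masks, 0.0 for disjoint masks.
    A pose search would evaluate this for each candidate (position,
    orientation, bending angle) and keep the maximizer.
    """
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```

With three orthogonal views, the per-view scores would typically be summed or averaged into one objective before maximizing over the pose parameters.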

    Vision-Aided Pedestrian Navigation for Challenging GNSS Environments

    There is a strong need for an accurate pedestrian navigation system that also functions in GNSS-challenged environments, namely urban areas and indoors, to improve safety and enhance everyday life. Pedestrian navigation is needed mainly in these environments, which are challenging not only for GNSS but also for other RF positioning systems and for some non-RF systems, such as magnetometer-based heading, due to the presence of ferrous materials. Indoor and urban navigation has been an active research area for years. No individual system at this time can address all the needs of pedestrian navigation in these environments, but a fused solution of different sensors can provide better accuracy, availability, and continuity. Self-contained sensors, namely digital compasses for measuring heading, gyroscopes for heading changes, and accelerometers for the user speed, constitute a good option for pedestrian navigation. However, their performance suffers from noise and biases that result in large position errors that increase with time. Such errors can, however, be mitigated using information about the user's motion obtained from consecutive images taken by a camera carried by the user, provided that its position and orientation with respect to the user's body are known. The motion of features in the images may then be transformed into information about the user's motion. Due to its distinctive characteristics, this vision-aiding complements other positioning technologies to provide better pedestrian navigation accuracy and reliability. This thesis discusses the concepts of a visual gyroscope, which provides the relative user heading, and a visual odometer, which provides the translation of the user between consecutive images. Both methods use a monocular camera carried by the user.
The visual gyroscope monitors the motion of virtual features, called vanishing points, which arise from parallel straight lines in the scene; the change of their location resolves heading, roll, and pitch. The method is applicable to human environments, as the straight lines in man-made structures enable vanishing point perception. For the visual odometer, the scale ambiguity that arises when using the homography between consecutive images to observe the translation is resolved using two different methods. First, the scale is computed using a special configuration intended for indoors. Second, the scale is resolved using differenced GNSS carrier phase measurements of the camera in a method aimed at urban environments, where GNSS cannot perform alone because tall buildings block the required line-of-sight to four satellites. However, the use of visual perception provides position information by exploiting a minimum of two satellites, and therefore the availability of the navigation solution is substantially increased. Both methods are sufficiently tolerant of the challenges of visual perception in indoor and urban environments, namely low lighting and dynamic objects hindering the view. The heading and translation are further integrated with other positioning systems, and a navigation solution is obtained. The performance of the proposed vision-aided navigation was tested in various environments, indoors and in urban canyons, to demonstrate its effectiveness. These experiments, although of limited duration, show that visual processing efficiently complements other positioning technologies to provide better pedestrian navigation accuracy and reliability.
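For a calibrated pinhole camera, a vanishing point is the image of a 3-D direction, so heading (yaw) and pitch follow directly from back-projecting it through the intrinsic matrix. The sketch below uses that standard model, not the thesis's exact formulation, and assumes the usual camera frame (x right, y down, z forward).

```python
import numpy as np

def heading_from_vanishing_point(vp, K):
    """Yaw and pitch (radians) of the 3-D direction imaged at vanishing
    point vp = (u, v), for a pinhole camera with intrinsic matrix K."""
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])  # back-project the ray
    d /= np.linalg.norm(d)
    yaw = np.arctan2(d[0], d[2])                 # rotation about the vertical axis
    pitch = np.arctan2(-d[1], np.hypot(d[0], d[2]))  # elevation above the horizon
    return yaw, pitch
```

Tracking how a vanishing point drifts across frames, and converting each location to angles this way, yields the relative heading changes the visual gyroscope provides.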

    Shape-based IMU/Camera Tightly Coupled Object-level SLAM using Rao-Blackwellized Particle Filtering

    Simultaneous Localization and Mapping (SLAM) is a decades-old problem. The classical solution to this problem utilizes entities such as feature points that cannot facilitate interactions between a robot and its environment (e.g., grabbing objects). Recent advances in deep learning have paved the way to accurately detecting objects in images under various illumination conditions and occlusions. This has led to the emergence of object-level solutions to the SLAM problem. Current object-level methods depend on an initial solution from classical approaches and assume that errors are Gaussian. This research develops a standalone solution to object-level SLAM that integrates data from a monocular camera and an IMU (available in low-end devices) using a Rao-Blackwellized Particle Filter (RBPF). The RBPF does not assume a Gaussian distribution for the error; thus, it can handle a variety of scenarios (such as when a symmetrical object with pose ambiguities is encountered). The developed method utilizes shape instead of texture; therefore, texture-less objects can be incorporated into the solution. In the particle weighting process, a new method is developed that utilizes the Intersection over Union (IoU) of the observed and projected boundaries of the object and does not require point-to-point correspondence; thus, it is not prone to false data correspondences. Landmark initialization is another important challenge for object-level SLAM. In the state-of-the-art delayed initialization, the trajectory estimation relies only on the motion model provided by IMU mechanization (during initialization), leading to large errors. In this thesis, two novel undelayed initializations are developed: one relies only on a monocular camera and an IMU, and the other utilizes an ultrasonic rangefinder as well.
The developed object-level SLAM is tested using wheeled robots and handheld devices, and a position error of 4.1 to 13.1 cm (0.005 to 0.028 of the total path length) has been obtained through extensive experiments using only a single object. These experiments were conducted in different indoor environments under different conditions (e.g., illumination). Further, it is shown that undelayed initialization using an ultrasonic sensor can reduce the algorithm's runtime by half.
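The IoU-based particle weighting step can be sketched in a few lines. Using the raw IoU of the observed and projected object boundaries as the per-particle likelihood is an illustrative simplification, not the thesis's exact weighting function, and the effective-sample-size helper is the standard diagnostic used to decide when to resample.

```python
import numpy as np

def reweight_particles(weights, ious, eps=1e-12):
    """Multiply particle weights by an IoU-based likelihood and renormalize.

    weights : (M,) current particle weights
    ious    : (M,) IoU between observed and projected object boundaries,
              one value per particle's hypothesized pose
    eps     : floor so particles with zero IoU are not annihilated outright
    """
    w = np.asarray(weights, float) * (np.asarray(ious, float) + eps)
    return w / w.sum()

def effective_sample_size(weights):
    """Standard ESS diagnostic: M for uniform weights, 1 for a degenerate set."""
    w = np.asarray(weights, float)
    return 1.0 / np.sum(w ** 2)
```

Because the IoU compares whole silhouettes, no point-to-point correspondence is needed, which is the property the abstract highlights.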

    Synthetic image generation and the use of virtual environments for image enhancement tasks

    Deep learning networks are often difficult to train when there are insufficient image samples, and gathering real-world images tailored to a specific task takes considerable effort. This dissertation explores techniques for synthetic image generation and virtual environments for various image enhancement, correction, and restoration tasks, specifically distortion correction, dehazing, shadow removal, and intrinsic image decomposition. First, given various image formation equations, such as those used in distortion correction and dehazing, synthetic image samples can be produced, provided that the equation is well-posed. Second, using virtual environments to train image models is suitable for simulating real-world effects that are otherwise difficult to gather or replicate, such as haze and shadows. Given synthetic images, one cannot train a network directly on them, as there may be a gap between the synthetic and real domains. We have devised several techniques for generating synthetic images and formulated domain adaptation methods with which our trained deep-learning networks perform competitively in distortion correction, dehazing, and shadow removal. Additional studies and directions are provided for the intrinsic image decomposition problem and the exploration of procedural content generation, where a virtual Philippine city was created as an initial prototype.
    Keywords: image generation, image correction, image dehazing, shadow removal, intrinsic image decomposition, computer graphics, rendering, machine learning, neural networks, domain adaptation, procedural content generation
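For dehazing, the image formation equation commonly used to synthesize training samples is the atmospheric scattering model I = J·t + A·(1 − t), where J is the clear scene, t the transmission, and A the atmospheric light. The dissertation does not state its exact equations, so the sketch below assumes this standard model with the usual depth-dependent transmission t = exp(−β·depth).

```python
import numpy as np

def synthesize_haze(J, t, A=1.0):
    """Render a hazy image via the atmospheric scattering model.

    J : clear image, values in [0, 1]
    t : per-pixel transmission in [0, 1], broadcastable to J's shape
    A : global atmospheric light
    """
    J = np.asarray(J, float)
    t = np.asarray(t, float)
    return J * t + A * (1.0 - t)  # haze replaces scene radiance as t drops

def transmission_from_depth(depth, beta=0.1):
    """Transmission t = exp(-beta * depth) for scattering coefficient beta."""
    return np.exp(-beta * np.asarray(depth, float))
```

Pairing each clear image with its synthesized hazy counterpart yields supervised training data; the domain gap to real haze is what the dissertation's adaptation methods then address.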