
    Cassini observations reveal a regime of zonostrophic macroturbulence on Jupiter

    In December 2000, the Cassini fly-by near Jupiter delivered high-resolution images of Jupiter’s clouds over the entire planet in a band between 50°N and 50°S. Three daily-averaged two-dimensional velocity snapshots extracted from these images are used to perform spectral analysis of jovian atmospheric macroturbulence. A similar analysis is also performed on alternative data documented by Choi and Showman (Choi, D., Showman, A. [2011]. Icarus 216, 597–609), based on a different method of image processing. The inter-comparison of the products of both analyses ensures a better constraint of the spectral estimates. Both analyses reveal strong anisotropy of the kinetic energy spectrum. The zonal spectrum is very steep and most of the kinetic energy resides in slowly evolving, alternating zonal (west–east) jets, while the non-zonal, or residual spectrum obeys the Kolmogorov–Kraichnan law specific to two-dimensional turbulence in the range of the inverse energy cascade. The spectral data is used to estimate the inverse cascade rate ∊ and the zonostrophy index Rβ for the first time. Although both datasets yield somewhat different values of ∊, it is estimated to be in the range 0.5–1.0 × 10−5 m2 s−3. The ensuing values of Rβ ≳ 5 belong well in the range of zonostrophic turbulence whose threshold corresponds to Rβ ≃ 2.5. We infer that the large-scale circulation is maintained by an anisotropic inverse energy cascade. The removal of the Great Red Spot from both datasets has no significant effect upon either the spectra or the inverse cascade rate. The spectral data are used to compute the rate of the energy exchange, W, between the non-zonal structures and the large-scale zonal flow. It is found that instantaneous values of W may exceed ∊ by an order of magnitude. Previous numerical simulations with a barotropic model suggest that W and ∊ attain comparable values only after averaging of W over a sufficiently long time. 
Near-instantaneous values of W that have been routinely used to infer the rate of the kinetic energy supply to Jupiter’s zonal flow may therefore significantly overestimate ∊. This disparity between W and ∊ may resolve the long-standing conundrum of an unrealistically high rate of energy transfer to the zonal flow. The meridional diffusivity Kϕ in the regime of zonostrophic turbulence is given by an expression that depends on ∊. The value of Kϕ estimated from the spectra is compared against data from the dispersion of stratospheric gases and debris resulting from the Shoemaker-Levy 9 comet and Wesley asteroid impacts in 1994 and 2009, respectively. Not only is Kϕ found to be consistent with estimates for both impacts, but the eddy diffusivity found from observations appears to be scale-independent. This behaviour could be a consequence of the interaction between anisotropic turbulence and Rossby waves specific to the regime of zonostrophic macroturbulence.
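As an illustrative sketch of the quantities the abstract estimates, the zonostrophy index can be computed from β, the inverse cascade rate ∊, and the rms zonal velocity, using the common convention Rβ = kβ/kR with the transitional wavenumber kβ = (β³/∊)^(1/5) and the Rhines wavenumber kR = (β/2U)^(1/2). The Jupiter-like input values below are assumptions for illustration, not the paper's data:

```python
import math

def zonostrophy_index(beta, epsilon, u_rms):
    """Zonostrophy index R_beta = k_beta / k_R (a common convention;
    the paper's exact definition is not reproduced here).

    beta    : meridional gradient of the Coriolis parameter [1/(m s)]
    epsilon : inverse energy cascade rate [m^2 s^-3]
    u_rms   : rms velocity of the large-scale flow [m/s]
    """
    k_beta = (beta**3 / epsilon) ** 0.2           # transitional wavenumber
    k_rhines = math.sqrt(beta / (2.0 * u_rms))    # Rhines wavenumber
    return k_beta / k_rhines

# Illustrative mid-latitude Jupiter-like values (assumed, not from the paper):
# beta ~ 3.5e-12 1/(m s), epsilon ~ 1e-5 m^2 s^-3, U ~ 50 m/s
r_beta = zonostrophy_index(3.5e-12, 1.0e-5, 50.0)
```

With these assumed inputs the index comes out well above the zonostrophic threshold Rβ ≃ 2.5, consistent with the abstract's Rβ ≳ 5.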

    Multiperspective mosaics and layered representation for scene visualization

This thesis documents the efforts made to implement multiperspective mosaicking for the purpose of mosaicking undervehicle and roadside sequences. For the undervehicle sequences, it is desired to create a large, high-resolution mosaic that may be used to quickly inspect the entire scene shot by a camera making a single pass underneath the vehicle. Several constraints are placed on the video data, in order to facilitate the assumption that the entire scene in the sequence exists on a single plane. Therefore, a single mosaic is used to represent a single video sequence. Phase correlation is used to perform motion analysis in this case. For roadside video sequences, it is assumed that the scene is composed of several planar layers, as opposed to a single plane. Layer extraction techniques are implemented in order to perform this decomposition. Instead of using phase correlation to perform motion analysis, the Lucas-Kanade motion tracking algorithm is used in order to create dense motion maps. Using these motion maps, spatial support for each layer is determined based on a pre-initialized layer model. By separating the pixels in the scene into motion-specific layers, it is possible to sample each element in the scene correctly while performing multiperspective mosaicking. It is also possible to fill in many gaps in the mosaics caused by occlusions, hence creating more complete representations of the objects of interest. The results are several mosaics, with each mosaic representing a single planar layer of the scene.
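Phase correlation, the motion-analysis step named above, recovers a translation between two frames from the peak of the inverse FFT of their normalized cross-power spectrum. A minimal NumPy sketch of the idea (integer-pixel shifts only; the thesis's actual implementation details are not reproduced here):

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer-pixel translation taking img_a to img_b.

    The normalized cross-power spectrum keeps only phase; its inverse
    FFT is (ideally) an impulse located at the translation offset.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around; map them to negatives
    h, w = img_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For example, a frame cyclically shifted by (5, −7) pixels yields a correlation peak at exactly that offset. Real footage is not a cyclic shift, which is one reason the thesis imposes planarity constraints on the video data.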

    Video Processing with Additional Information

Cameras are frequently deployed along with many additional sensors in aerial and ground-based platforms. Many video datasets have metadata containing measurements from inertial sensors, GPS units, etc. Hence the development of better video processing algorithms using additional information attains special significance. We first describe an intensity-based algorithm for stabilizing low resolution and low quality aerial videos. The primary contribution is the idea of minimizing the discrepancy in the intensity of selected pixels between two images. This is an application of inverse compositional alignment for registering images of low resolution and low quality, for which minimizing the intensity difference over salient pixels with high gradients results in faster and better convergence than when using all the pixels. Secondly, we describe a feature-based method for stabilization of aerial videos and segmentation of small moving objects. We use the coherency of background motion to jointly track features through the sequence. This enables accurate tracking of large numbers of features in the presence of repetitive texture, lack of well-conditioned feature windows, etc. We incorporate the segmentation problem within the joint feature tracking framework and propose the first combined joint-tracking and segmentation algorithm. The proposed approach enables highly accurate tracking, and segmentation of feature tracks that is used in a MAP-MRF framework for obtaining dense pixelwise labeling of the scene. We demonstrate competitive moving object detection in challenging video sequences of the VIVID dataset containing moving vehicles and humans that are small enough to cause background subtraction approaches to fail. Structure from Motion (SfM) has matured to a stage where the emphasis is on developing fast, scalable and robust algorithms for large reconstruction problems.
The availability of additional sensors such as inertial units and GPS along with video cameras motivates the development of SfM algorithms that leverage these additional measurements. In the third part, we study the benefits of the availability of a specific form of additional information - the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors - together with a monocular video sequence for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large scale applications. The proposed SfM algorithm is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for the case of large reconstruction problems, the proposed algorithm takes less time to reach its solution compared to bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research dataset, and several other datasets.
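The Lucas-Kanade-style tracking referred to above rests on solving, per window, a small least-squares system built from the optical-flow constraint Ix·u + Iy·v = −It. A minimal single-window sketch of that core step (the joint multi-feature tracking, robust weighting, and pyramids of the actual method are omitted):

```python
import numpy as np

def lucas_kanade_flow(img0, img1):
    """Single-window Lucas-Kanade: least-squares solve of
    [Ix Iy] [u v]^T = -It, assuming one uniform sub-pixel
    translation across the whole patch."""
    # central-difference spatial gradients and temporal difference
    Ix = (np.roll(img0, -1, axis=1) - np.roll(img0, 1, axis=1)) / 2.0
    Iy = (np.roll(img0, -1, axis=0) - np.roll(img0, 1, axis=0)) / 2.0
    It = img1 - img0
    # trim the one-pixel border where np.roll wraps around
    Ix, Iy, It = (a[1:-1, 1:-1].ravel() for a in (Ix, Iy, It))
    A = np.stack([Ix, Iy], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It, rcond=None)
    return u, v
```

On a smooth synthetic image translated by a small sub-pixel amount, this recovers the shift to within a few percent; in practice the linearization only holds for small motions, which is why coarse-to-fine schemes are layered on top.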

    Sub-Riemannian geometry and its applications to Image Processing

Master's Thesis in Mathematics (MAT399MAMN-MA)

    A virtual reality system using the concentric mosaic: Construction, rendering, and data compression

This paper proposes a new image-based rendering (IBR) technique called "concentric mosaic" for virtual reality applications. IBR using the plenoptic function is an efficient technique for rendering new views of a scene from a collection of sample images previously captured. It provides much better image quality and lower computational requirement for rendering than conventional three-dimensional (3-D) model-building approaches. The concentric mosaic is a 3-D plenoptic function with viewpoints constrained on a plane. Compared with other more sophisticated four-dimensional plenoptic functions such as the light field and the lumigraph, the file size of a concentric mosaic is much smaller. In contrast to a panorama, the concentric mosaic allows users to move freely in a circular region and observe significant parallax and lighting changes without recovering the geometric and photometric scene models. The rendering of concentric mosaics is very efficient, and involves the reordering and interpolating of previously captured slit images in the concentric mosaic. A concentric mosaic typically consists of hundreds of high-resolution images, which consume a significant amount of storage and bandwidth for transmission. An MPEG-like compression algorithm is therefore proposed in this paper taking into account the access patterns and redundancy of the mosaic images. The compression algorithms of two equivalent representations of the concentric mosaic, namely the multiperspective panoramas and the normal setup sequence, are investigated. A multiresolution representation of concentric mosaics using a nonlinear filter bank is also proposed.
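To illustrate the "reordering of previously captured slit images" during rendering, the sketch below maps a viewing ray to a (camera index, column index) pair, assuming an idealized outward-looking slit camera swung on the capture circle. The geometry and all parameter names here are illustrative assumptions; the paper's actual concentric-mosaic parameterization (e.g., tangential slit directions and inter-slit interpolation) differs in its details:

```python
import math

def ray_to_slit(px, py, dx, dy, R, n_cameras, fov, n_cols):
    """Map a viewing ray (origin inside the capture circle, unit
    direction (dx, dy)) to the nearest captured slit: the camera
    position on the circle of radius R that the ray passes through,
    and the image column sampling that ray direction."""
    # intersect the ray with the capture circle: |p + t d|^2 = R^2, t > 0
    b = px * dx + py * dy
    c = px * px + py * py - R * R          # c < 0 for origins inside
    t = -b + math.sqrt(b * b - c)          # forward intersection
    qx, qy = px + t * dx, py + t * dy      # point on the capture circle
    theta = math.atan2(qy, qx)             # angular camera position
    cam = round(theta / (2 * math.pi) * n_cameras) % n_cameras
    # ray angle relative to the outward optical axis at q
    phi = math.atan2(dy, dx) - theta
    phi = (phi + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
    col = round((phi / fov + 0.5) * (n_cols - 1))
    return cam, min(max(col, 0), n_cols - 1)
```

For instance, a ray fired radially from the circle's center maps to the camera at that bearing and the central image column; rendering a whole novel view repeats this lookup once per output column.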

    Remote sensing contributing to assess earthquake risk: from a literature review towards a roadmap

Remote sensing data and methods are widely deployed in order to contribute to the assessment of numerous components of earthquake risk. While for earthquake hazard-related investigations, the use of remotely sensed data is an established methodological element with a long research tradition, earthquake vulnerability–centred assessments incorporating remote sensing data have been increasing primarily in recent years. This goes along with a changing perspective of the scientific community, which considers the assessment of vulnerability and its constituent elements as a pivotal part of a comprehensive risk analysis. Thereby, the availability of new sensor systems enables an appreciable contribution from remote sensing for the first time. In this manner, a survey of the interdisciplinary conceptual literature dealing with the scientific perception of risk, hazard and vulnerability reveals the demand for a comprehensive description of earthquake hazards as well as an assessment of the present and future conditions of the elements exposed. A review of earthquake-related remote sensing literature, realized both in a qualitative and quantitative manner, shows the already existing and published manifold capabilities of remote sensing contributing to assess earthquake risk. These include earthquake hazard-related analysis such as detection and measurement of lineaments and surface deformations in pre- and post-event applications. Furthermore, pre-event seismic vulnerability–centred assessment of the built and natural environment and damage assessments for post-event applications are presented. Based on the review and the discussion of scientific trends and current research projects, first steps towards a roadmap for remote sensing are drawn, explicitly taking scientific, technical, multi- and transdisciplinary as well as political perspectives into account, which is intended to open possible future research activities.