
    An Automated Method for Tracking Clouds in Planetary Atmospheres

    We present an automated method for cloud tracking that can be applied to planetary images. The method is based on a digital correlator that compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks, thereby bypassing the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter’s white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter’s atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (the baseline case), yields displacement vectors very similar to those of the previous analysis. Combining this distance estimator with the method of order ranks yields a technique that is more robust in the presence of outliers and noise and produces better-quality results. Finally, we introduce a distance metric which, combined with order ranks, provides results of quality similar to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
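    The baseline matcher and the order-rank variant described above can be sketched in a few lines of numpy; the block size, search radius, and synthetic test image below are illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def ssd_displacement(img1, img2, y, x, bs, search):
    """Find the displacement of the bs x bs block at (y, x) in img1 by
    minimizing the sum of squared differences (SSD) over a square search
    window in img2 -- a minimal sketch of direct-correlation tracking."""
    ref = img1[y:y+bs, x:x+bs]
    best, best_dyx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y+dy:y+dy+bs, x+dx:x+dx+bs]
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best:
                best, best_dyx = ssd, (dy, dx)
    return best_dyx

def rank_transform(block):
    """Replace each pixel by its order rank before comparison, making the
    SSD estimator robust to outliers (the order-rank variant)."""
    flat = block.ravel()
    ranks = np.empty_like(flat)
    ranks[np.argsort(flat, kind="stable")] = np.arange(flat.size)
    return ranks.reshape(block.shape)

# Synthetic check: shift a random pattern by a known offset and recover it.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(2, -3), axis=(0, 1))
print(ssd_displacement(frame1, frame2, 20, 20, 16, 5))  # -> (2, -3)
```

Applying `rank_transform` to both blocks before the SSD comparison reproduces the rank-based variant without changing the search loop itself.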

    Enhancing retinal images by nonlinear registration

    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research on retinal diseases and research on human cognition, the nervous system, metabolism, and the blood stream, to name a few. In this paper, we propose to transfer knowledge acquired in the fields of optics and imaging for solar astrophysics to improve retinal imaging at very high spatial resolution, with a view to medical diagnosis. The main purpose is to assist health care practitioners by enhancing retinal images and detecting abnormal features. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolution using correlation techniques borrowed from solar astronomy. A further purpose is to define tracers of movement from the local correlation analysis, following the proper motions of an image from one moment to the next, such as changes in optical flow, which would be of high interest for medical diagnosis. Comment: 21 pages, 7 figures, submitted to Optics Communications

    Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution

    Image and video quality in Long Range Observation Systems (LOROS) suffers from atmospheric turbulence, which causes small neighbourhoods in image frames to move chaotically in different directions and substantially hampers visual analysis of such image and video sequences. The paper presents a real-time algorithm for perfecting turbulence-degraded videos by means of stabilization and resolution enhancement, the latter achieved by exploiting the turbulent motion itself. The algorithm involves generation of a reference frame; estimation, for each incoming video frame, of a local image displacement map with respect to the reference frame; segmentation of the displacement map into two classes, stationary and moving objects; and resolution enhancement of stationary objects while preserving real motion. Experiments with synthetic and real-life sequences have shown that the enhanced videos, generated in real time, exhibit substantially better resolution and complete stabilization of stationary objects while retaining real motion. Comment: Submitted to The Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain
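    The segmentation step can be sketched as follows, assuming per-pixel displacement maps (relative to the reference frame) are already available. The threshold value and the zero-mean-jitter model of turbulence are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def segment_displacements(disp_maps, thresh=1.0):
    """Classify each pixel as moving or stationary from a stack of
    displacement maps (one per video frame, relative to a reference frame).

    Turbulence produces chaotic, nearly zero-mean jitter, so the temporal
    mean displacement of a stationary pixel stays small, while a real
    moving object gives a coherent mean displacement.
    disp_maps: array of shape (frames, height, width, 2) holding (dy, dx).
    Returns a boolean mask, True where real motion is detected."""
    mean_disp = disp_maps.mean(axis=0)               # averages out jitter
    magnitude = np.linalg.norm(mean_disp, axis=-1)   # |mean displacement|
    return magnitude > thresh

rng = np.random.default_rng(1)
# 20 frames of zero-mean turbulent jitter over an 8x8 grid...
jitter = rng.normal(0.0, 0.3, size=(20, 8, 8, 2))
# ...plus a coherently moving object in the top-left 3x3 corner.
jitter[:, :3, :3, :] += 2.0
mask = segment_displacements(jitter)
print(mask[:3, :3].all(), mask[4:, 4:].any())  # -> True False
```

Resolution enhancement would then be applied only where the mask is False, leaving the detected real motion untouched.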

    Decorrelation Times of Photospheric Fields and Flows

    We use autocorrelation to investigate evolution in flow fields inferred by applying Fourier Local Correlation Tracking (FLCT) to a sequence of high-resolution (0.3 arcsec), high-cadence (≃ 2 min) line-of-sight magnetograms of NOAA active region (AR) 10930 recorded by the Narrowband Filter Imager (NFI) of the Solar Optical Telescope (SOT) aboard the Hinode satellite over 12-13 December 2006. To baseline the timescales of flow evolution, we also autocorrelated the magnetograms, at several spatial binnings, to characterize the lifetimes of active region magnetic structures versus spatial scale. Autocorrelation of flow maps can be used to optimize tracking parameters, to understand tracking algorithms' susceptibility to noise, and to estimate flow lifetimes. The tracking parameters varied include: the time interval Δt between the magnetogram pairs tracked, the spatial binning applied to the magnetograms, and the windowing parameter σ used in FLCT. Flow structures vary over a range of spatial and temporal scales (including unresolved scales), so tracked flows represent a local average of the flow over a particular range of space and time. We define the flow lifetime to be the flow decorrelation time, τ. For Δt > τ, tracking results represent the average velocity over one or more flow lifetimes. We analyze lifetimes of flow components, divergences, and curls as functions of magnetic field strength and spatial scale. We find a significant trend of increasing lifetimes of flow components, divergences, and curls with field strength, consistent with Lorentz forces partially governing flows in the active photosphere, as well as strong trends of increasing flow lifetime and decreasing magnitudes with increases in both spatial scale and Δt. Comment: 48 pages, 20 figures, submitted to the Astrophysical Journal; full-resolution images in manuscript (8MB) at http://solarmuri.ssl.berkeley.edu/~welsch/public/manuscripts/flow_lifetimes_v2.pdf
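    The lifetime estimate can be illustrated with a minimal sketch: correlate each map in a sequence with the first one and report the first lag at which the correlation drops below 1/e. The 1/e criterion and the synthetic AR(1) test maps are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def decorrelation_time(maps):
    """Estimate the decorrelation time tau of a sequence of maps as the
    first lag at which the Pearson correlation (over pixels) with the
    lag-0 map falls below 1/e.  maps: array of shape (time, ny, nx)."""
    ref = maps[0].ravel()
    ref = (ref - ref.mean()) / ref.std()
    n = ref.size
    for lag in range(1, len(maps)):
        cur = maps[lag].ravel()
        cur = (cur - cur.mean()) / cur.std()
        if np.dot(ref, cur) / n < 1.0 / np.e:
            return lag
    return len(maps)  # never decorrelated within the sequence

# Synthetic "flow maps": an AR(1) sequence with known correlation decay,
# r**lag, so tau should come out near log(1/e)/log(r) ~ 4.5, i.e. lag 5.
rng = np.random.default_rng(2)
r = 0.8                                   # per-step correlation
maps = [rng.normal(size=(128, 128))]
for _ in range(30):
    maps.append(r * maps[-1] + np.sqrt(1 - r**2) * rng.normal(size=(128, 128)))
maps = np.array(maps)
print(decorrelation_time(maps))
```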

    Micro-expression Recognition using Spatiotemporal Texture Map and Motion Magnification

    Micro-expressions are short-lived, rapid facial expressions exhibited by individuals in high-stakes situations. Studying micro-expressions is important because they cannot be modified by an individual and hence offer a peek into what the individual is actually feeling and thinking, as opposed to what he/she is trying to portray. The spotting and recognition of micro-expressions has applications in fields such as criminal investigation, psychotherapy, and education. However, due to their short-lived and rapid nature, spotting, recognizing, and classifying micro-expressions is a major challenge. In this paper, we design a hybrid approach for spotting and recognizing micro-expressions that combines motion magnification using Eulerian Video Magnification with a Spatiotemporal Texture Map (STTM). The approach was validated on the spontaneous micro-expression dataset CASME II in comparison with the baseline. Using 10-fold cross-validation with a linear-kernel Support Vector Machine (SVM), it achieved an accuracy of 80%, an increase of 5% over the existing baseline.
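    The magnification stage follows the Eulerian principle: temporally band-pass each pixel's time series, amplify it, and add it back. The sketch below illustrates only that principle with numpy; it omits the spatial pyramid of the full Eulerian Video Magnification method, and the amplification factor and box-filter lengths are illustrative choices.

```python
import numpy as np

def magnify_motion(frames, alpha=10.0, slow=5, fast=2):
    """Eulerian-style motion magnification, per-pixel sketch: band-pass
    each pixel's time series (difference of two causal moving averages),
    scale it by alpha, and add it back to the video.
    frames: array of shape (time, height, width)."""
    def box(x, k):  # causal moving average of length k along the time axis
        c = np.cumsum(x, axis=0)
        out = np.empty_like(x, dtype=float)
        out[:k] = c[:k] / np.arange(1, k + 1)[:, None, None]
        out[k:] = (c[k:] - c[:-k]) / k
        return out
    bandpassed = box(frames, fast) - box(frames, slow)  # temporal band-pass
    return frames + alpha * bandpassed

# A subtle 0.01-amplitude oscillation becomes clearly stronger after
# magnification, while the constant background is left unchanged.
t = np.arange(60)
frames = np.ones((60, 4, 4)) + 0.01 * np.sin(2 * np.pi * t / 8)[:, None, None]
out = magnify_motion(frames)
print(out.std() / frames.std() > 5)  # temporal variation is amplified
```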

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images (MHI) and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination, in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method was tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
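    The Motion History Image representation that the first approach extends can be sketched as a simple per-frame update: recently moving pixels are stamped with the maximum value and older motion decays, leaving a fading temporal template. The decay constant and motion threshold below are illustrative choices.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=15, motion_thresh=0.1):
    """One update step of a Motion History Image (MHI): pixels whose
    inter-frame difference exceeds the threshold are set to the maximum
    timestamp tau, while all other pixels decay by one.  Recent motion is
    therefore bright and older motion fades."""
    moving = np.abs(frame - prev_frame) > motion_thresh
    return np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))

# A bright square moving one pixel right per frame leaves a fading trail.
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 4:8, 4 + t:8 + t] = 1.0
mhi = np.zeros((16, 16))
for t in range(1, 5):
    mhi = update_mhi(mhi, frames[t], frames[t - 1])
print(mhi[5, 11], mhi[5, 4])  # the newest edge is brightest, the oldest faded
```

Orientation histograms computed over such templates yield the kind of spatio-temporal descriptors the abstract refers to.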

    Horizontal flow fields observed in Hinode G-band images. I. Methods

    Context: The interaction of plasma motions and magnetic fields is an important mechanism driving solar activity in all its facets. For example, photospheric flows are responsible for the advection of magnetic flux, the redistribution of flux during the decay of sunspots, and the build-up of magnetic shear in flaring active regions. Aims: Systematic studies based on G-band data from the Japanese Hinode mission provide the means to gather statistical properties of horizontal flow fields. This facilitates comparative studies of solar features, e.g., G-band bright points, magnetic knots, pores, and sunspots, at various stages of evolution and in distinct magnetic environments, thus enhancing our understanding of the dynamic Sun. Methods: We adapted Local Correlation Tracking (LCT) to measure horizontal flow fields based on G-band images obtained with the Solar Optical Telescope on board Hinode. In total, about 200 time-series with durations between 1-16 h and cadences between 15-90 s were analyzed. Selecting both a high-cadence (dt = 15 s) and a long-duration (dT = 16 h) time-series enabled us to optimize and validate the LCT input parameters, hence ensuring robust, reliable, uniform, and accurate processing of a huge data volume. Results: The LCT algorithm produces the best results for G-band images having a cadence of 60-90 s. If the cadence is shorter, the velocities of slowly moving features are not reliably detected. If the cadence is longer, the scene on the Sun will have evolved too much to bear any resemblance to the earlier situation. Consequently, in both instances horizontal proper motions are underestimated. The most reliable and yet detailed flow maps are produced using a Gaussian kernel with a size of 2560 km x 2560 km and a full-width-at-half-maximum (FWHM) of 1200 km (corresponding to the size of a typical granule) as the sampling window. Comment: 12 pages, 8 figures, 4 tables, accepted for publication in Astronomy and Astrophysics
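    The core LCT step can be sketched as follows: apodize co-located subimages from two frames with a Gaussian window and take the peak of their cross-correlation as the local shift. The window size, FWHM, and synthetic test below are illustrative, not the paper's tuned values.

```python
import numpy as np

def lct_shift(img1, img2, yc, xc, half=16, fwhm=8.0):
    """Local correlation tracking at one location: multiply identical
    subimages of two frames by a Gaussian sampling window of the given
    FWHM, then take the peak of their FFT cross-correlation as the local
    displacement between the frames."""
    y, x = np.mgrid[-half:half, -half:half]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    window = np.exp(-(y**2 + x**2) / (2.0 * sigma**2))
    a = img1[yc-half:yc+half, xc-half:xc+half] * window
    b = img2[yc-half:yc+half, xc-half:xc+half] * window
    a -= a.mean()
    b -= b.mean()
    cc = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Map FFT indices to signed shifts in [-half, half)
    dy, dx = [(p if p < half else p - 2 * half) for p in peak]
    return int(dy), int(dx)

# Synthetic check: a random "granulation" pattern shifted by a known offset.
rng = np.random.default_rng(3)
granulation = rng.random((128, 128))
moved = np.roll(granulation, shift=(1, -2), axis=(0, 1))
print(lct_shift(granulation, moved, 64, 64))  # -> (1, -2)
```

Repeating this at every grid point of an image pair produces the horizontal flow maps discussed in the abstract.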