
    Energy spectra in turbulent bubbly flows

    We conduct experiments in a turbulent bubbly flow to study the nature of the transition between the classical −5/3 energy spectrum scaling for a single-phase turbulent flow and the −3 scaling for a swarm of bubbles rising in a quiescent liquid and of bubble-dominated turbulence. The bubblance parameter, which measures the ratio of the bubble-induced kinetic energy to the kinetic energy induced by the turbulent liquid fluctuations before bubble injection, is often used to characterise the bubbly flow. We vary the bubblance parameter from b = ∞ (pseudo-turbulence) to b = 0 (single-phase flow) over 2-3 orders of magnitude (0.01-5) to study its effect on the turbulent energy spectrum and liquid velocity fluctuations. The probability density functions (PDFs) of the liquid velocity fluctuations show deviations from the Gaussian profile for b > 0, i.e. when bubbles are present in the system. The PDFs are asymmetric, with higher probability in the positive tails. The energy spectra are found to follow the −3 scaling at length scales smaller than the size of the bubbles for bubbly flows. This −3 spectrum scaling holds not only in the well-established case of pseudo-turbulence, but surprisingly in all cases where bubbles are present in the system (b > 0). Therefore, it is a generic feature of turbulent bubbly flows, and the bubblance parameter is probably not a suitable parameter to characterise the energy spectrum in bubbly turbulent flows. The physical reason is that the energy input by the bubbles passes over only to higher wave numbers, and the energy production due to the bubbles can be directly balanced by the viscous dissipation in the bubble wakes, as suggested by Lance & Bataille (1991). In addition, we provide an alternative explanation by balancing the energy production of the bubbles with viscous dissipation in Fourier space.
    Comment: J. Fluid Mech. (in press)
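    The −3 scaling claim can be illustrated numerically. Below is a minimal sketch (not the paper's experimental pipeline): it synthesises a velocity signal with a prescribed k⁻³ spectrum from random-phase Fourier amplitudes, then recovers the log-log slope of the energy spectrum with an FFT and a linear fit; `spectrum_slope` and the band limits are illustrative choices.

```python
import numpy as np

def spectrum_slope(u, kmin=4, kmax=128):
    """Fit the log-log slope of the 1D energy spectrum of a velocity
    fluctuation signal u over the wavenumber band [kmin, kmax)."""
    E = np.abs(np.fft.rfft(u)) ** 2          # unnormalised energy spectrum
    k = np.arange(len(E))
    band = (k >= kmin) & (k < kmax)
    slope, _ = np.polyfit(np.log(k[band]), np.log(E[band]), 1)
    return slope

# Synthesise a signal with a prescribed k^-3 spectrum: Fourier
# amplitudes ~ k^(-3/2) (so power ~ k^-3) with random phases.
rng = np.random.default_rng(0)
n = 4096
k = np.arange(n // 2 + 1)
amp = np.zeros(n // 2 + 1)
amp[1:] = k[1:].astype(float) ** -1.5
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1))
u = np.fft.irfft(amp * phases, n)
print(spectrum_slope(u))   # close to -3
```

    The same slope fit applied to measured liquid-velocity time series (via Taylor's hypothesis) is the standard way such spectra are compared against the −5/3 and −3 predictions.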

    General Dynamic Scene Reconstruction from Multiple View Video

    This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple-view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.

    Integration Mechanisms for Heading Perception

    Previous studies of heading perception suggest that human observers employ spatiotemporal pooling to accommodate noise in optic flow stimuli. Here, we investigated how spatial and temporal integration mechanisms are used for judgments of heading through a psychophysical experiment involving three different types of noise. Furthermore, we developed two ideal observer models to study the components of the spatial information used by observers when performing the heading task. In the psychophysical experiment, we applied three types of direction noise to optic flow stimuli to differentiate the involvement of spatial and temporal integration mechanisms. The results indicate that temporal integration mechanisms play a role in heading perception, though their contribution is weaker than that of the spatial integration mechanisms. To elucidate how observers process spatial information to extract heading from a noisy optic flow field, we compared psychophysical performance in response to random-walk direction noise with that of two ideal observer models (IOMs). One model relied on 2D screen-projected flow information (2D-IOM), while the other used environmental, i.e., 3D, flow information (3D-IOM). The results suggest that human observers compensate for the loss of information during the 2D retinal projection of the visual scene for modest amounts of noise. This suggests the likelihood of a 3D reconstruction during heading perception, which breaks down under extreme levels of noise.
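    For a purely translational observer, the heading task above amounts to locating the focus of expansion (FOE) of a radial flow field. A minimal least-squares sketch of that idea, with hypothetical zero-mean direction noise applied to each flow vector (an illustration only, not the paper's 2D-IOM or 3D-IOM):

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus-of-expansion estimate: each flow vector
    should point radially away from the FOE, so the FOE minimises the
    squared perpendicular distance to the line through each point
    along its flow direction."""
    d = flow / np.linalg.norm(flow, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)   # unit perpendiculars
    b = np.sum(n * points, axis=1)              # constraint n_i . e = n_i . p_i
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Radial flow away from a known heading point, corrupted by direction
# noise that rotates each vector by a random angle.
rng = np.random.default_rng(1)
true_foe = np.array([1.0, -0.5])
pts = rng.uniform(-5.0, 5.0, (200, 2))
vec = pts - true_foe
ang = rng.normal(0.0, 0.1, 200)
noisy = np.stack([np.cos(ang) * vec[:, 0] - np.sin(ang) * vec[:, 1],
                  np.sin(ang) * vec[:, 0] + np.cos(ang) * vec[:, 1]],
                 axis=1)
foe = estimate_foe(pts, noisy)
print(foe)   # close to true_foe
```

    With zero-mean direction noise, pooling over many flow vectors averages the perturbations out, which is the spatial-integration effect the experiment probes.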

    A robust lesion boundary segmentation algorithm using level set methods

    This paper addresses the issue of accurate lesion segmentation in retinal imagery, using level set methods and a novel stopping mechanism - an elementary features scheme. Specifically, the curve propagation is guided by a gradient map built using a combination of histogram equalization and robust statistics. The stopping mechanism gathers elementary features as the curve deforms over time and then, using a lesionness measure defined herein, 'looks back in time' to find the point at which the curve best fits the real object. We compare the proposed method against five other segmentation algorithms on 50 randomly selected images of exudates, with a database of clinician-demarcated boundaries as ground truth.
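    The gradient map that guides the curve can be sketched as follows. This is a minimal illustration combining histogram equalization with a gradient magnitude only; the robust-statistics component the paper uses is omitted, and `gradient_map` is a hypothetical name.

```python
import numpy as np

def gradient_map(img):
    """Histogram-equalise an 8-bit image, then return the gradient
    magnitude of the equalised image: a simple edge map on which a
    propagating level set curve would slow down and stop."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    eq = cdf[img.astype(np.uint8)]
    gy, gx = np.gradient(eq)
    return np.hypot(gx, gy)

# A vertical step edge produces a ridge in the gradient map.
img = np.zeros((8, 8))
img[:, 4:] = 200.0
g = gradient_map(img)
```

    Equalising first stretches low-contrast lesion boundaries so that the gradient ridge is strong enough to halt the curve.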

    Action recognition based on efficient deep feature learning in the spatio-temporal domain

    Hand-crafted feature functions are usually designed based on the domain knowledge of a presumably controlled environment and often fail to generalize, as the statistics of real-world data cannot always be modeled correctly. Data-driven feature learning methods, on the other hand, have emerged as an alternative that often generalizes better in uncontrolled environments. We present a simple, yet robust, 2D convolutional neural network extended to a concatenated 3D network that learns to extract features from the spatio-temporal domain of raw video data. The resulting network model is used for content-based recognition of videos. Relying on a 2D convolutional neural network allows us to exploit a pretrained network as a descriptor that yielded the best results on the large and challenging ILSVRC-2014 dataset. Experimental results on commonly used benchmark video datasets demonstrate that our results are state-of-the-art in terms of accuracy and computational time, without requiring any preprocessing (e.g., optic flow) or a priori knowledge of data capture (e.g., camera motion estimation), which makes our approach more general and flexible than others. Our implementation is made available.
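    The core operation behind such spatio-temporal feature learning is 3D convolution over stacked frames. A naive numpy sketch (real frameworks implement this far more efficiently, and `conv3d` is an illustrative name, not the paper's code), showing a temporal-difference kernel responding where pixels change between frames:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, as in CNN
    usage) of a (T, H, W) spatio-temporal volume with a small kernel."""
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# A temporal-difference kernel fires where pixel values change between
# consecutive frames, i.e. where there is motion.
vid = np.zeros((3, 4, 4))
vid[1, 1, 1] = 1.0                      # a blob appears in frame 1
kern = np.array([-1.0, 1.0]).reshape(2, 1, 1)
resp = conv3d(vid, kern)
print(resp[0, 1, 1], resp[1, 1, 1])     # 1.0 -1.0
```

    Learned 3D kernels generalise this hand-set temporal difference: the network discovers which spatio-temporal patterns discriminate between actions.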

    Mathematical and computer modeling of electro-optic systems using a generic modeling approach

    The conventional approach to modelling electro-optic sensor systems is to develop separate models for individual systems or classes of system, depending on the detector technology employed in the sensor and the application. However, this ignores commonality in the design and components of these systems. A generic approach is presented for modelling a variety of sensor systems operating in the infrared waveband that also allows systems to be modelled with different levels of detail and at different stages of the product lifecycle. The provision of different model types (parametric and image-flow descriptions) within the generic framework can allow valuable insights to be gained.

    Feature Guided Image Registration Applied to Phase and Wavelet-Based Optic Flow

    Optic flow algorithms are useful in problems such as computer vision, navigational systems, and robotics. However, current algorithms are computationally expensive or lack the accuracy to be effective compared with traditional navigation systems. Recently, lower-accuracy inertial navigation systems (INS) based on microelectromechanical systems (MEMS) technology have been proposed to replace more accurate traditional navigation systems.
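    As a baseline for the optic flow methods mentioned above, here is a minimal single-window Lucas-Kanade estimate. It is a standard gradient-based method, not the phase or wavelet-based approach of the paper, and `lk_flow` is an illustrative name.

```python
import numpy as np

def lk_flow(I0, I1):
    """Single-window Lucas-Kanade: solve [Ix Iy] v = -It in the
    least-squares sense over the whole image, yielding one flow
    vector (vx, vy) in pixels per frame."""
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v

# A Gaussian blob shifted by 0.2 px in x; LK recovers the shift.
yy, xx = np.mgrid[0:32, 0:32]
def blob(cx):
    return np.exp(-((xx - cx) ** 2 + (yy - 16.0) ** 2) / 20.0)
v = lk_flow(blob(15.0), blob(15.2))
print(v)   # vx close to 0.2, vy close to 0
```

    The least-squares system is the reason such methods are cheap but fragile: large motions violate the first-order Taylor assumption, which is what motivates phase and wavelet-based alternatives.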