    Event-Based Motion Segmentation by Motion Compensation

    In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events") with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are, therefore, a more natural fit than traditional cameras to acquire motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., segmentation) and the motion parameters of the objects (or the background) by maximization of an objective function, which builds upon recent results on event-based motion compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state of the art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement.
    Comment: When viewed in Acrobat Reader, several of the figures animate. Video: https://youtu.be/0q6ap_OSBA
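
    The method's objective builds on event-based motion compensation: events warped with the correct motion parameters align into sharp edges, so the sharpness of the resulting image can score a candidate motion. The sketch below only illustrates that idea under simple assumptions (a single constant 2D velocity per candidate, variance as the sharpness measure); it is not the paper's joint segmentation objective, and the event format and candidate list are hypothetical.

        import numpy as np

        def motion_compensation_contrast(events, flow, t_ref, img_shape):
            """Warp events with a candidate 2D velocity (px/s) to t_ref, accumulate an
            image of warped events, and return its variance as a sharpness score."""
            x, y, t = events[:, 0], events[:, 1], events[:, 2]
            # Move each event along the candidate flow back to the reference time.
            xw = np.round(x - (t - t_ref) * flow[0]).astype(int)
            yw = np.round(y - (t - t_ref) * flow[1]).astype(int)
            h, w = img_shape
            valid = (xw >= 0) & (xw < w) & (yw >= 0) & (yw < h)
            iwe = np.zeros(img_shape)
            np.add.at(iwe, (yw[valid], xw[valid]), 1.0)  # event count per pixel
            return iwe.var()  # higher variance = sharper = better-aligned motion

        # Hypothetical usage: events is an (N, 3) array of (x, y, t); pick the candidate
        # velocity that maximizes the contrast of the warped-event image.
        events = np.random.rand(1000, 3) * [240, 180, 0.01]
        candidates = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0)]
        best = max(candidates, key=lambda f: motion_compensation_contrast(events, f, 0.0, (180, 240)))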

    Experimental assessment of presumed filtered density function models

    Measured filtered density functions (FDFs), as well as an assumed beta-distribution model of the mixture fraction and the “subgrid”-scale (SGS) scalar variance typically used in large eddy simulations, were studied by analysing experimental data obtained from two-dimensional planar laser-induced fluorescence measurements in isothermal swirling turbulent flows at a constant Reynolds number of 29 000 for different swirl numbers (0.3, 0.58, and 1.07).
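
    For context on the assumed beta-distribution model, the standard presumed-FDF construction parameterizes a beta PDF of the mixture fraction by its filtered mean and SGS variance via moment matching. The sketch below shows that generic parameterization only; it is not derived from the paper's measurements, and the example mean and variance values are arbitrary.

        import numpy as np
        from scipy.stats import beta

        def presumed_beta_fdf(z, mean, variance):
            """Beta FDF of mixture fraction z with the given filtered mean and
            subgrid-scale variance (requires 0 < variance < mean * (1 - mean))."""
            # Moment matching: a = mean * g, b = (1 - mean) * g,
            # with g = mean * (1 - mean) / variance - 1.
            g = mean * (1.0 - mean) / variance - 1.0
            return beta.pdf(z, mean * g, (1.0 - mean) * g)

        # Example: FDF evaluated on a mixture-fraction grid for mean = 0.3, variance = 0.02.
        z = np.linspace(1e-3, 1.0 - 1e-3, 200)
        fdf = presumed_beta_fdf(z, 0.3, 0.02)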

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these network-generated data and to make decisions about the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) that are enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
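
    As one concrete illustration of the kind of network-data analysis the survey covers, a supervised classifier can be trained to predict whether a candidate lightpath configuration meets a quality-of-transmission threshold. The sketch below is a generic example with synthetic data and hypothetical features; it is not taken from the paper and does not represent any specific method surveyed in it.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical feature matrix: one row per lightpath, columns standing in for
        # quantities such as link length, number of spans, symbol rate, modulation
        # order, and launch power (synthetic values, normalized to [0, 1]).
        rng = np.random.default_rng(0)
        X = rng.random((1000, 5))
        # Hypothetical label: 1 if the configuration met the BER target, 0 otherwise.
        y = (0.4 * X[:, 0] + 0.6 * X[:, 3] < 0.5).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))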

    Efficient and effective human action recognition in video through motion boundary description with a compact set of trajectories

    Human action recognition (HAR) is at the core of human-computer interaction and video scene understanding. However, achieving effective HAR in an unconstrained environment is still a challenging task. To that end, trajectory-based video representations are currently widely used. Despite the promising levels of effectiveness achieved by these approaches, problems regarding computational complexity and the presence of redundant trajectories still need to be addressed in a satisfactory way. In this paper, we propose a method for trajectory rejection, reducing the number of redundant trajectories without degrading the effectiveness of HAR. Furthermore, to realize efficient optical flow estimation prior to trajectory extraction, we integrate a method for dynamic frame skipping. Experiments with four publicly available human action datasets show that the proposed approach outperforms state-of-the-art HAR approaches in terms of effectiveness, while simultaneously mitigating the computational complexity.
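
    The trajectory-rejection idea can be pictured as filtering out tracks whose overall motion is uninformative before descriptors are computed. The sketch below is a simplified stand-in using a hypothetical total-displacement criterion and thresholds; the paper's actual rejection rule and its dynamic frame-skipping scheme are not reproduced here.

        import numpy as np

        def reject_redundant_trajectories(tracks, min_path=2.0, max_path=50.0):
            """Keep trajectories whose total path length is informative.

            tracks: array of shape (N, L, 2) holding L tracked (x, y) points per trajectory.
            Tracks that barely move (likely background or noise) or travel implausibly
            far (likely tracking failures) are discarded before descriptor computation.
            """
            steps = np.diff(tracks, axis=1)                        # per-frame displacements
            path_len = np.linalg.norm(steps, axis=2).sum(axis=1)   # total path length per track
            keep = (path_len > min_path) & (path_len < max_path)
            return tracks[keep]

        # Example with 500 random-walk trajectories of length 15.
        tracks = np.random.randn(500, 15, 2).cumsum(axis=1)
        kept = reject_redundant_trajectories(tracks)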

    Ash plume properties retrieved from infrared images: a forward and inverse modeling approach

    We present a coupled fluid-dynamic and electromagnetic model for volcanic ash plumes. In a forward approach, the model is able to simulate the plume dynamics from prescribed input flow conditions and generate the corresponding synthetic thermal infrared (TIR) image, allowing a comparison with field-based observations. An inversion procedure is then developed to retrieve ash plume properties from TIR images. The adopted fluid-dynamic model is based on a one-dimensional, stationary description of a self-similar (top-hat) turbulent plume, for which an asymptotic analytical solution is obtained. The electromagnetic emission/absorption model is based on Schwarzschild's equation and on Mie's theory for disperse particles, assuming that particles are coarser than the radiation wavelength and neglecting scattering. [...] Application of the inversion procedure to an ash plume at Santiaguito volcano (Guatemala) has allowed us to retrieve the main plume input parameters, namely the initial radius $b_0$, velocity $U_0$, temperature $T_0$, gas mass ratio $n_0$, entrainment coefficient $k$, and their related uncertainty. Moreover, coupling with the electromagnetic model, we have been able to obtain a reliable estimate of the equivalent Sauter diameter $d_s$ of the total particle size distribution. The presented method is general and, in principle, can be applied to the spatial distribution of particle concentration and temperature obtained from any fluid-dynamic model, either integral or multidimensional, stationary or time-dependent, single or multiphase. The method discussed here is fast and robust, thus indicating potential for applications to real-time estimation of ash mass flux and particle size distribution, which is crucial for model-based forecasts of the volcanic ash dispersal process.
    Comment: 41 pages, 13 figures, submitted paper
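
    To make the emission/absorption step concrete, the scattering-free Schwarzschild equation dI/ds = kappa * (B(T) - I) can be integrated along a line of sight through the plume to produce a synthetic radiance value. The sketch below is a generic discretization under the stated no-scattering assumption; the function names, the per-segment profiles, and the simplified source term are hypothetical and not the paper's implementation.

        import numpy as np

        def schwarzschild_intensity(kappa, temperature, ds, source, I0=0.0):
            """Integrate dI/ds = kappa * (B(T) - I) along a line of sight split into
            segments of length ds, assuming kappa and T are constant per segment.

            kappa:       absorption coefficients per segment (1/m)
            temperature: temperatures per segment (K)
            source:      function mapping temperature to the source term B(T)
            """
            I = I0
            for k, T in zip(kappa, temperature):
                tau = k * ds                                     # segment optical thickness
                I = I * np.exp(-tau) + source(T) * (1.0 - np.exp(-tau))
            return I

        # Example: 100 one-metre segments with a hypothetical linear source term
        # standing in for the Planck function at the sensor's TIR wavelength.
        I = schwarzschild_intensity(np.full(100, 0.05), np.linspace(600.0, 300.0, 100),
                                    ds=1.0, source=lambda T: T)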