    Measurement and physical interpretation of the mean motion of turbulent density patterns detected by the BES system on MAST

    The mean motion of turbulent patterns detected by a two-dimensional (2D) beam emission spectroscopy (BES) diagnostic on the Mega Amp Spherical Tokamak (MAST) is determined using a cross-correlation time delay (CCTD) method. The statistical reliability of the method is studied by means of synthetic-data analysis. The experimental measurements on MAST indicate that the apparent mean poloidal motion of the turbulent density patterns in the lab frame arises because the longest correlation direction of the patterns (parallel to the local background magnetic field) is not parallel to the direction of the fastest mean plasma flow (usually toroidal when strong neutral beam injection is present). The measurements are consistent with the mean plasma motion being toroidal. The sum of all other contributions to the apparent mean poloidal velocity of the density patterns (mean poloidal plasma flow, phase velocity of the density patterns in the plasma frame, non-linear effects, etc.) is found to be negligible. These results hold in all investigated L-mode, H-mode and internal transport barrier (ITB) discharges. The one exception is a high-poloidal-beta discharge (poloidal beta being the ratio of the plasma pressure to the poloidal magnetic field energy density), in which a large magnetic island exists; in this case BES detects very little motion, an effect that is currently theoretically unexplained.
    Comment: 28 pages, 15 figures, submitted to PPC
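    The abstract does not include an implementation, but the core of the CCTD method can be illustrated with a minimal sketch: estimate the apparent velocity from the time delay that maximizes the cross-correlation between two fluctuation signals from channels a known distance apart. The function name, the equal-length-signal assumption, and the simple peak-picking below are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def cctd_velocity(sig_a, sig_b, separation, dt):
    """Apparent pattern velocity from the cross-correlation time delay
    between two equal-length fluctuation time series.

    sig_a, sig_b : 1-D arrays from two channels `separation` metres apart
    dt           : sampling interval in seconds
    """
    # Normalize to zero mean and unit variance so the correlation peak
    # reflects pattern similarity rather than signal amplitude.
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()

    # Cross-correlation over all lags; convert lag index to seconds.
    corr = np.correlate(b, a, mode="full") / len(a)
    lags = np.arange(-(len(a) - 1), len(a)) * dt

    # The lag of the correlation peak is the transit time of the
    # pattern between the two channels.
    tau = lags[np.argmax(corr)]
    if tau == 0.0:
        return np.nan  # no resolvable delay at this sampling rate
    return separation / tau  # m/s; the sign gives the direction
```

    In practice one would average correlation functions over many time windows and fit the peak rather than pick a single sample; the statistical reliability of exactly this step is what the paper's synthetic-data analysis addresses.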

    The aceToolbox: low-level audiovisual feature extraction for retrieval and classification

    In this paper we present an overview of a software platform developed within the aceMedia project, termed the aceToolbox, which provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimentation Model (XM), with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors that reflect real image content. We describe the architecture of the toolbox and provide an overview of the descriptors supported to date. We also briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in two different content-processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
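    As a rough sketch of what a local descriptor over an arbitrarily shaped segment means in practice (the histogram choice, function names, and parameters here are illustrative assumptions, not the aceToolbox API): the descriptor is computed only from pixels inside a segment mask, and similarity-based retrieval then ranks images by descriptor distance.

```python
import numpy as np

def segment_color_histogram(image, mask, bins=8):
    """Illustrative local descriptor: a joint RGB histogram computed only
    over the pixels of an arbitrarily shaped segment (mask == True),
    not over the whole rectangular frame.

    image : H x W x 3 uint8 array
    mask  : H x W boolean array marking a non-empty segment
    bins  : per-channel quantization levels (assumed to divide 256)
    """
    pixels = image[mask]                          # N x 3, segment pixels only
    idx = (pixels // (256 // bins)).astype(int)   # quantize each channel
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()                      # normalized descriptor

def l1_distance(h1, h2):
    """Simple descriptor distance for similarity-based retrieval."""
    return np.abs(h1 - h2).sum()
```

    A retrieval run would compute such a descriptor for every segment in the collection and sort candidates by ascending distance to the query segment's descriptor.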

    Going Deeper into Action Recognition: A Survey

    Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis has evolved from early schemes, often limited to controlled environments, to advanced solutions that can learn from millions of videos and be applied to almost all daily activities. Given the broad range of applications, from video surveillance to human-computer interaction, scientific milestones in action recognition are being reached ever more rapidly, quickly rendering once-dominant methods obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations and then move into the realm of deep-learning-based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable setbacks, in the hope of raising fresh questions and motivating new research directions for the reader.