    A data-driven framework for dimensionality reduction and causal inference in climate fields

    We propose a data-driven framework to simplify the description of spatiotemporal climate variability into a few entities and their causal linkages. Given a high-dimensional climate field, the methodology first reduces its dimensionality into a set of regionally constrained patterns. Time-dependent causal links are then inferred in the interventional sense through the fluctuation-response formalism, as shown in Baldovin et al. (2020). These two steps allow us to explore how regional climate variability can influence remote locations. To distinguish between true and spurious responses, we propose a novel analytical null model for the fluctuation-dissipation relation, thereby allowing for uncertainty estimation at a given confidence level. Finally, we select a set of metrics to summarize the results, offering a useful and simplified approach to exploring climate dynamics. We showcase the methodology on the monthly sea surface temperature field at the global scale. We demonstrate the usefulness of the proposed framework by studying a few individual links as well as "link maps", which visualize the cumulative degree of causation between a given region and the whole system. Finally, each pattern is ranked in terms of its "causal strength", quantifying its relative ability to influence the system's dynamics. We argue that the methodology makes it possible to explore and characterize causal relationships in high-dimensional spatiotemporal fields in a rigorous and interpretable way.
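
    The causal step above rests on the fluctuation-response (fluctuation-dissipation) formalism. As a rough, non-authoritative sketch, a common quasi-Gaussian estimator of the lagged response from unperturbed data can be written as below; the paper's exact estimator, pattern definitions, and null model are not reproduced here, and the array shapes and function name are assumptions.

        import numpy as np

        def linear_response(x, lag):
            # Quasi-Gaussian fluctuation-dissipation estimate of the lagged response:
            # R(lag) ~ C(lag) @ C(0)^{-1}, with C the covariance of the anomalies.
            # x is assumed to have shape (time, n_patterns), e.g. time series of the
            # regionally constrained patterns.
            x = x - x.mean(axis=0)                    # work with anomalies
            n = x.shape[0] - lag
            c_lag = x[lag:lag + n].T @ x[:n] / n      # lagged covariance C(lag)
            c_0 = x[:n].T @ x[:n] / n                 # equal-time covariance C(0)
            return c_lag @ np.linalg.inv(c_0)         # entry (i, j): response of pattern i
                                                      # to a small push on pattern j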

    Video semantic content analysis framework based on ontology combined MPEG-7

    The rapid increase in the amount of available video data is creating a growing demand for efficient methods of understanding and managing it at the semantic level. The new multimedia standard, MPEG-7, provides rich functionality for generating audiovisual descriptions, but it is expressed solely in XML Schema, which offers little support for expressing semantic knowledge. In this paper, a video semantic content analysis framework based on an ontology combined with MPEG-7 is presented. A domain ontology is used to define high-level semantic concepts and their relations in the context of the examined domain. MPEG-7 metadata terms for audiovisual descriptions and video content analysis algorithms are expressed in this ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how low-level features and video analysis algorithms should be applied according to different perceptual content. Temporal Description Logic is used to describe semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the sports video domain and shows promising results.
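
    As a purely illustrative sketch of the kind of temporal rule such a framework can encode, the toy Python below flags an event when a pattern holds across consecutive shots; the actual ontology, Description Logic rules, and feature detectors of the paper are not given here, and all feature names and thresholds are hypothetical.

        def detect_events(shots, rules):
            # shots: chronologically ordered dicts of low-level features per video shot.
            # rules: event label -> predicate over a pair of consecutive shots, standing
            # in for the Temporal Description Logic rules of the framework.
            events = []
            for prev, curr in zip(shots, shots[1:]):
                for label, holds in rules.items():
                    if holds(prev, curr):
                        events.append((label, prev["start"], curr["end"]))
            return events

        # Toy rule: "excited crowd followed by a close-up" suggests a highlight.
        rules = {"highlight": lambda a, b: a["audio_energy"] > 0.8 and b["close_up"]}
        shots = [
            {"start": 0.0, "end": 4.0, "audio_energy": 0.9, "close_up": False},
            {"start": 4.0, "end": 7.5, "audio_energy": 0.4, "close_up": True},
        ]
        print(detect_events(shots, rules))   # [('highlight', 0.0, 7.5)]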

    Basic gestures as spatiotemporal reference frames for repetitive dance/music patterns in samba and Charleston

    The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. To this end, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns were then extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into non-orthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum-effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed into dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
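
    One simple way to expose a repetitive pattern at a given metric period from a motion-capture trajectory is cycle averaging, sketched below; this is an illustration only, not the paper's decomposition into non-orthogonal periodicities, and the sampling assumptions and names are mine.

        import numpy as np

        def basic_gesture(signal, period_samples):
            # signal: one coordinate of one joint, sampled at a fixed rate and expressed
            # in a body-centered reference frame; period_samples: metric period in samples.
            signal = np.asarray(signal, dtype=float)
            n_cycles = len(signal) // period_samples
            folded = signal[:n_cycles * period_samples].reshape(n_cycles, period_samples)
            return folded.mean(axis=0)   # average cycle ~ the repetitive gesture at that period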

    Predictive Encoding of Contextual Relationships for Perceptual Inference, Interpolation and Prediction

    We propose a new neurally inspired model that can learn to encode the global contextual relationships of visual events across time and space and use this contextual information to modulate the analysis-by-synthesis process in a predictive coding framework. The model learns latent contextual representations by maximizing the predictability of visual events based on local and global contextual information through both top-down and bottom-up processes. In contrast to standard predictive coding models, the prediction error in this model is used to update the contextual representation but does not alter the feedforward input to the next layer, which is more consistent with neurophysiological observations. We establish the computational feasibility of this model by demonstrating its abilities in several respects. Our model outperforms the state-of-the-art performance of gated Boltzmann machines (GBMs) in estimating contextual information. It can also interpolate missing events or predict future events in image sequences while simultaneously estimating contextual information. We show that it achieves state-of-the-art prediction accuracy on a variety of tasks and can interpolate missing frames, a capability that GBMs lack.
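
    The contrast with standard predictive coding can be made concrete with a toy single-step update in which the prediction error adjusts only the latent context while the next layer still receives a feedforward code of the input; the linear maps, shapes, and learning rate below are illustrative stand-ins, not the paper's architecture.

        import numpy as np

        def context_step(x, context, W_pred, W_enc, lr=0.01):
            # W_pred: (input_dim, context_dim) top-down prediction map;
            # W_enc: (code_dim, input_dim) bottom-up encoder. Both are hypothetical.
            prediction = W_pred @ context                  # top-down prediction of the visual event
            error = x - prediction                         # prediction error
            context = context + lr * (W_pred.T @ error)    # error updates the contextual state only
            feedforward = W_enc @ x                        # next layer sees the input code, not the error
            return context, feedforward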