
    Aesthetic Highlight Detection in Movies Based on Synchronization of Spectators’ Reactions.

    Detection of aesthetic highlights is a challenge for understanding the affective processes taking place during movie watching. In this paper we study spectators’ responses to movie aesthetic stimuli in a social context and aim to uncover the emotional component of aesthetic highlights in movies. Our assumption is that synchronized physiological and behavioral reactions occur among spectators during these highlights because: (i) the aesthetic choices of filmmakers are made to elicit specific emotional reactions (e.g. special effects, empathy and compassion toward a character) and (ii) watching a movie together causes spectators’ affective reactions to be synchronized through emotional contagion. We compare different approaches to estimating synchronization among multiple spectators’ signals, such as pairwise, group and overall synchronization measures, to detect aesthetic highlights in movies. The results show that an unsupervised architecture relying on synchronization measures is able to capture different properties of spectators’ synchronization and to detect aesthetic highlights from both spectators’ electrodermal and acceleration signals. We find that pairwise synchronization measures perform most accurately, independently of the highlight category and movie genre. Moreover, we observe that electrodermal signals have more discriminative power than acceleration signals for highlight detection.
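A minimal sketch of the pairwise-synchronization idea described above, assuming electrodermal traces stored one row per spectator; the window and step sizes are illustrative choices, not the paper's actual parameters:

```python
import numpy as np

def pairwise_synchronization(signals, win=128, step=64):
    """Mean absolute pairwise Pearson correlation per sliding window.

    signals: (n_spectators, n_samples) array, e.g. electrodermal activity.
    """
    n, T = signals.shape
    scores = []
    for start in range(0, T - win + 1, step):
        c = np.corrcoef(signals[:, start:start + win])  # n x n per-window correlations
        iu = np.triu_indices(n, k=1)                    # upper triangle: each pair once
        scores.append(np.abs(c[iu]).mean())
    return np.array(scores)

# Toy check: a shared stimulus-driven component raises synchrony over noise.
rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 20, 1000))
synced = common + 0.3 * rng.standard_normal((5, 1000))
alone = rng.standard_normal((5, 1000))
print(pairwise_synchronization(synced).mean() > pairwise_synchronization(alone).mean())  # True
```

Windows whose mean pairwise correlation rises well above the baseline would then be candidate highlight segments.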

    Many-objectives optimization: a machine learning approach for reducing the number of objectives

    Solving real-world multi-objective optimization problems with multi-objective optimization algorithms becomes difficult when the number of objectives is high, since the algorithms generally used to solve these problems are based on the concept of non-dominance, which ceases to work as the number of objectives grows. This problem is known as the curse of dimensionality. At the same time, the existence of many objectives, a characteristic of practical optimization problems, makes choosing a solution to the problem very difficult. Different approaches are used in the literature to reduce the number of objectives required for optimization. This work proposes a machine learning methodology, designated FS-OPA, to tackle this problem. The proposed methodology was assessed on the DTLZ benchmark problems suggested in the literature and compared with similar algorithms, showing good performance. Finally, the methodology was applied to a difficult real problem in polymer processing, demonstrating its effectiveness. The proposed algorithm has some advantages over a similar machine-learning-based algorithm in the literature (NL-MVU-PCA), namely the possibility of establishing variable–variable and objective–variable relations (not only objective–objective), and the elimination of the need to define/choose a kernel or to optimize algorithm parameters. Collaboration with the DM(s) allows explainable solutions to be obtained. This research was funded by POR Norte under the PhD Grant PRT/BD/152192/2021. The authors also acknowledge funding by FEDER funds through the COMPETE 2020 Programme and National Funds through FCT (Portuguese Foundation for Science and Technology) under the projects UIDB/05256/2020 and UIDP/05256/2020, the Center for Mathematical Sciences Applied to Industry (CeMEAI) and the support from the São Paulo Research Foundation (FAPESP grant No 2013/07375-0), the Center for Artificial Intelligence (C4AI-USP), the support from the São Paulo Research Foundation (FAPESP grant No 2019/07665-4) and the IBM Corporation.
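The abstract does not spell out FS-OPA's internals, so the following is only a generic illustration of one objective-reduction idea: dropping objectives that are near-perfectly correlated with an objective already kept. FS-OPA itself additionally builds variable–variable and objective–variable relations, which are not sketched here.

```python
import numpy as np

def reduce_objectives(F, threshold=0.95):
    """Keep the first of any group of near-perfectly correlated objectives.

    F: (n_solutions, n_objectives) objective values sampled from a population.
    Returns the indices of the objectives retained.
    """
    corr = np.abs(np.corrcoef(F.T))  # |correlation| between objective columns
    keep = []
    for j in range(F.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

F = np.random.default_rng(1).random((200, 3))
F = np.column_stack([F, 2 * F[:, 0] + 0.1])  # 4th objective duplicates the 1st
print(reduce_objectives(F))  # [0, 1, 2] -- the redundant objective is dropped
```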

    Recurrence complexity analysis of oscillatory signals with application to general anesthesia EEG signals

    Submitted to Physics Letters A. Recurrence structures in univariate time series are challenging to detect. We propose a combination of recurrence and symbolic analysis to identify such structures in a univariate signal. This method allows us to obtain a symbolic representation of the signal and to quantify it by calculating its complexity measure. To this end, we propose a novel method of phase space reconstruction based on the signal's time-frequency representation and show that it outperforms conventional phase space reconstruction by delay embedding techniques. We evaluate our method on synthetic data and show its application to experimental EEG signals.
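The conventional baseline the abstract compares against, delay-coordinate embedding, can be sketched as follows (the embedding dimension and delay below are illustrative; the proposed time-frequency-based reconstruction is not reproduced here):

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Delay-coordinate phase space reconstruction (Takens-style baseline).

    Returns an (N, dim) trajectory whose rows are
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)].
    """
    N = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + N] for i in range(dim)])

x = np.sin(np.linspace(0, 8 * np.pi, 500))  # a simple oscillatory signal
traj = delay_embed(x, dim=2, tau=25)
print(traj.shape)  # (475, 2)
```

Recurrences are then sought in the reconstructed trajectory (e.g. via a recurrence plot), and the structures found can be symbolised and scored by a complexity measure.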

    Detecting synchrony in EEG: A comparative study of functional connectivity measures

    © 2018 Elsevier Ltd. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/. This author accepted manuscript is made available following a 12-month embargo from the date of publication (December 2018) in accordance with the publisher’s archiving policy. In neuroscience, there is considerable current interest in investigating the connections between different parts of the brain. EEG is one modality for examining brain function, with advantages such as high temporal resolution and low cost. Many measures of connectivity have been proposed, but which is the best measure to use? In this paper, we address part of this question: which measure is best able to detect connections that do exist, in the challenging situation of non-stationary and noisy data from nonlinear systems, like EEG? This requires knowledge of the true relationship between signals, so we compare 26 measures of functional connectivity on simulated data (unidirectionally coupled Hénon maps, and simulated EEG). To determine whether synchrony was detected, surrogate data were generated and analysed, and a threshold was determined from the surrogate ensemble. No measure performed best in all tested situations. The correlation and coherence measures performed best on stationary data with many samples. The S-estimator, correntropy, mean-phase coherence (Hilbert), mutual information (kernel), nonlinear interdependence (S) and nonlinear interdependence (N) performed most reliably on non-stationary data with small to medium window sizes. Of these, correlation and the S-estimator have execution times that scale most slowly with the number of channels and the number of samples.
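The surrogate-thresholding step described above can be sketched for the simplest measure, correlation. Permutation surrogates stand in here for the phase-randomised surrogates more usual in EEG work; the measure, surrogate count and significance level are all illustrative assumptions:

```python
import numpy as np

def synchrony_detected(x, y, n_surr=200, alpha=0.05, seed=0):
    """Compare |corr(x, y)| against the (1 - alpha) quantile of a
    permutation-surrogate ensemble, mirroring the thresholding step."""
    rng = np.random.default_rng(seed)
    stat = abs(np.corrcoef(x, y)[0, 1])
    surr = [abs(np.corrcoef(x, rng.permutation(y))[0, 1]) for _ in range(n_surr)]
    return stat > np.quantile(surr, 1 - alpha)

t = np.linspace(0, 10, 1000)
driver = np.sin(t)
coupled = driver + 0.1 * np.random.default_rng(1).standard_normal(1000)
print(synchrony_detected(driver, coupled))  # True: the connection is detected
```

Each of the 26 measures would be substituted for the correlation statistic, with its own surrogate ensemble defining the detection threshold.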

    An Examination of Some Significant Approaches to Statistical Deconvolution

    We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework of non-Gaussian signals. The first class of algorithms we examine is the class of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite differences in origins and motivations. We give new theoretical results concerning the behaviour and generality of these algorithms and give evidence of scenarios where they may fail. In some cases, we present new modifications to the algorithms to overcome these shortfalls. Following our discussion of the MED algorithms, we next look at a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions. We examine its BD performance compared with the MED algorithms. We find that BD carried out via correntropy-matching cannot be straightforwardly interpreted as simultaneous moment-matching, owing to the breakdown of the correntropy expansion in terms of moments. Other issues, such as maximum/minimum phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques. For the problem of RD, we give a categorisation of the different kinds of uncertainties encountered in estimation and discuss the techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustification of deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in giving improved robust deconvolution relative to the standard plug-in Wiener Filter (WF).
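As a point of reference, the standard plug-in Wiener Filter that the MWF improves upon can be sketched in the frequency domain. The constant noise-to-signal ratio and the toy blurring response are assumptions for illustration, and none of the MWF's robustness modifications are included:

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=0.01):
    """Plug-in Wiener deconvolution with a constant noise-to-signal ratio.

    y: observed (blurred) signal; h: assumed-known blurring response.
    """
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter in the frequency domain
    return np.real(np.fft.ifft(G * Y))

x = np.zeros(64)
x[10], x[40] = 1.0, -0.5           # sparse source signal
h = np.array([1.0, 0.6, 0.3])      # minimum-phase blurring response
y = np.convolve(x, h)[:64]         # observed signal (the truncated tail is zero here)
xhat = wiener_deconvolve(y, h)
print(np.argmax(np.abs(xhat)))  # 10 -- the dominant spike is recovered
```

Robust variants such as the MWF modify this filter when `h` or the signal spectrum is only estimated, rather than known exactly as assumed above.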