
    Asymptotic for a semilinear hyperbolic equation with asymptotically vanishing damping term, convex potential, and integrable source

    We investigate the long-time behavior of solutions to the semilinear hyperbolic equation $(E_{\alpha})$: $u''(t)+\gamma(t)u'(t)+Au(t)+f(u(t))=g(t)$, $t\geq 0$, where $A$ is a self-adjoint nonnegative operator, $f$ a function which derives from a convex function, and $\gamma$ a nonnegative function which behaves, for $t$ large enough, like $\frac{K}{t^{\alpha}}$ with $K>0$ and $\alpha\in[0,1)$. We obtain sufficient conditions on the source term $g(t)$ ensuring the weak or the strong convergence of any solution $u(t)$ of $(E_{\alpha})$, as $t\to+\infty$, to a solution of the stationary equation $Av+f(v)=0$, if one exists. (Comment: 13 pages)
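    To make the setting concrete, here is a minimal numerical sketch of a scalar analogue of $(E_{\alpha})$; the choices $A=1$, $f(u)=u^3$ (gradient of the convex potential $u^4/4$), the constants $K$ and $\alpha$, and the integrable source $g$ are illustrative assumptions, not taken from the paper.

```python
# Scalar toy model of (E_alpha): u'' + gamma(t) u' + u + f(u) = g(t).
# All parameter choices below are assumptions for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

K, alpha = 1.0, 0.5                 # damping gamma(t) ~ K / t**alpha for large t
f = lambda u: u**3                  # derives from the convex potential u**4 / 4
g = lambda t: 1.0 / (1.0 + t)**2    # integrable source term

def rhs(t, y):
    u, v = y                        # v = u'(t)
    gamma = K / max(t, 1.0)**alpha  # cap near t = 0 to avoid the singularity
    return [v, g(t) - gamma * v - u - f(u)]

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])                 # u(t) approaches 0, the root of Av + f(v) = 0
```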

    Non self-adjoint Laplacians on a directed graph

    We consider a non-self-adjoint Laplacian on a directed graph with non-symmetric edge weights. We analyse spectral properties of this Laplacian under a Kirchhoff assumption. Moreover, we establish isoperimetric inequalities in terms of the numerical range to show the absence of the essential spectrum of the Laplacian on heavy-end directed graphs.
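    As a rough illustration of the objects involved, the sketch below builds a Laplacian L = D - W for a toy directed graph with non-symmetric weights and samples its (possibly complex) spectrum and its numerical range; the 3-vertex weight matrix and the out-degree convention are assumptions for illustration, not the paper's construction.

```python
# Toy non-self-adjoint graph Laplacian; the weights and degree convention
# are illustrative assumptions, not the construction used in the paper.
import numpy as np

W = np.array([[0.0, 2.0, 0.0],   # non-symmetric edge weights w(x, y)
              [0.5, 0.0, 1.0],
              [0.0, 3.0, 0.0]])
D = np.diag(W.sum(axis=1))       # weighted out-degrees
L = D - W                        # L != L^T, hence non-self-adjoint

print(np.linalg.eigvals(L))      # spectrum may be complex

# Sample the numerical range {<Lx, x> : ||x|| = 1}, which contains the spectrum.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2000)) + 1j * rng.standard_normal((3, 2000))
X /= np.linalg.norm(X, axis=0)   # normalize each column to a unit vector
vals = np.sum(X.conj() * (L @ X), axis=0)
print(vals[:5])
```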

    Temporal contextual descriptors and applications to emotion analysis.

    Current trends in technology suggest that the next generation of services and devices will allow smarter customization and automatic context recognition. Computers learn the behavior of their users and can offer them customized services depending on context, location, and preferences. One of the most important challenges in human-machine interaction is the proper understanding of human emotions by machines and automated systems. In recent years, progress in machine learning and pattern recognition has led to algorithms that learn to detect and identify human emotions from experience. These algorithms use different modalities, such as images, speech, and physiological signals, to analyze and learn human emotions. In many settings, vocal information may be more readily available than other modalities because of the widespread presence of voice sensors in phones, cars, and computer systems in general.

    In emotion analysis from speech, an audio utterance is represented by an ordered (in time) sequence of features, i.e., a multivariate time series. Typically, the sequence is further mapped into a global descriptor representative of the entire utterance, and this descriptor is used for classification and analysis. In classic approaches, statistics computed over the entire sequence serve as the global descriptor, which often discards the temporal ordering of the original sequence. Emotion, however, is a succession of acoustic events; by discarding their temporal ordering in the mapping, the classic approaches cannot detect acoustic patterns that lead to a certain emotion.

    In this dissertation, we propose a novel feature mapping framework that maps a temporally ordered sequence of acoustic features into data-driven global descriptors integrating the temporal information from the original sequence. The framework contains three mapping algorithms, which integrate the temporal information implicitly or explicitly in the descriptor's representation. In the first algorithm, the Temporal Averaging Algorithm, we average the data temporally using leaky integrators to produce a global descriptor that implicitly integrates the temporal information from the original sequence. To integrate the discrimination between classes into the mapping, we propose the Temporal Response Averaging Algorithm, which combines the temporal averaging step of the previous algorithm with unsupervised learning to produce data-driven temporal contextual descriptors. In the third algorithm, we use the topology-preserving property of Self-Organizing Maps and the continuous nature of speech to map a temporal sequence into an ordered trajectory representing the behavior over time of the input utterance on a 2-D map of emotions; here the temporal information is integrated explicitly in the descriptor, which makes it easier to monitor emotions in long speeches. The proposed framework maps speech data of different lengths to the same equivalent representation, which alleviates the problem of dealing with variable-length temporal sequences. This is advantageous in real-time settings where the size of the analysis window can be variable.

    Using the proposed feature mapping framework, we build a novel data-driven speech emotion detection and recognition system that indexes speech databases to facilitate the classification and retrieval of emotions. We test the proposed system using two datasets. The first corpus is acted; we show that the proposed mapping framework outperforms the classic approaches while providing descriptors that are suitable for the analysis and visualization of human emotions in speech data. The second corpus is an authentic dataset: a collection of debates, for which we propose a novel debate collection that is one of the first such initiatives in the literature. We show that the proposed system is able to learn human emotions from debates.
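    As a rough sketch of the leaky-integrator idea behind the first algorithm, the snippet below maps variable-length feature sequences to fixed-length descriptors; the decay constants, feature dimension, and concatenated read-out are illustrative assumptions, not the dissertation's exact Temporal Averaging Algorithm.

```python
# Sketch of leaky-integrator temporal averaging; parameters are assumptions.
import numpy as np

def leaky_descriptor(seq, decays=(0.5, 0.9, 0.99)):
    """Map a (T, d) sequence of frame features to a fixed-length descriptor.

    Each leak rate keeps a running average with a different memory span, so
    the concatenated final states retain coarse temporal ordering information
    regardless of the sequence length T.
    """
    states = []
    for lam in decays:
        s = np.zeros(seq.shape[1])
        for frame in seq:              # integrate the sequence in time order
            s = lam * s + (1.0 - lam) * frame
        states.append(s)
    return np.concatenate(states)      # length = len(decays) * d

# Two utterances of different lengths map to descriptors of the same size.
short = np.random.randn(50, 13)        # e.g. 50 frames of 13 MFCCs
long_ = np.random.randn(400, 13)
print(leaky_descriptor(short).shape, leaky_descriptor(long_).shape)  # (39,) (39,)
```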