263,491 research outputs found

    Neural analysis of seismic data: applications to the monitoring of Mt. Vesuvius

    The computing techniques currently available for seismic monitoring allow advanced analysis. However, correct event classification remains a critical aspect for the reliability of real-time automatic analysis. Among the existing methods, neural networks may be considered efficient tools for detection and discrimination, and may be integrated into intelligent systems for the automatic classification of seismic events. In this work we apply an unsupervised technique for the analysis and classification of seismic signals recorded in the Mt. Vesuvius area, in order to improve automatic event detection. The examined dataset contains about 1500 records divided into four typologies of events: earthquakes, landslides, artificial explosions, and “other” (any signals not included in the previous classes). First, Linear Predictive Coding (LPC) and a waveform parametrization are applied to achieve a significant and compact data encoding. Then, clustering is obtained using a Self-Organizing Map (SOM) neural network, which does not require an a-priori classification of the seismic signals; it groups signals with similar structures, providing a simple framework for understanding the relationships between them. The resulting SOM map is separated into different areas, each one containing the events of a defined type. This means that the SOM discriminates well among the four classes of seismic signals. Moreover, the system classifies a new input pattern depending on its position on the SOM map. The proposed approach can be an efficient instrument for the real-time automatic analysis of seismic data, especially in the case of possible volcanic unrest.
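
    The encode-then-cluster pipeline described above can be sketched with a toy SOM in NumPy. This is a minimal illustration, not the authors' code: the grid size, learning-rate and neighborhood schedules, and the synthetic stand-ins for LPC feature vectors are all assumptions.

    ```python
    import numpy as np

    def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        """Train a minimal Self-Organizing Map on feature vectors (n_samples, n_features)."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.normal(size=(h, w, data.shape[1]))
        # Grid coordinates of each node, used for the neighborhood function
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        n_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                # Best-matching unit: the node whose weight vector is closest to x
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), (h, w))
                # Linearly decay learning rate and neighborhood radius over training
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)
                sigma = sigma0 * (1.0 - frac) + 0.5
                # Gaussian neighborhood around the BMU pulls nearby nodes toward x
                grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
                influence = np.exp(-grid_d2 / (2.0 * sigma ** 2))
                weights += lr * influence[..., None] * (x - weights)
                step += 1
        return weights

    def bmu_of(weights, x):
        """Map a new input pattern to its position (best-matching unit) on the SOM."""
        dists = np.linalg.norm(weights - x, axis=-1)
        return np.unravel_index(np.argmin(dists), dists.shape)

    # Demo on synthetic vectors standing in for LPC encodings (assumed, not real data):
    # two well-separated classes should occupy different areas of the trained map.
    rng = np.random.default_rng(1)
    quakes = rng.normal(0.0, 0.1, size=(40, 8)) + 2.0
    blasts = rng.normal(0.0, 0.1, size=(40, 8)) - 2.0
    som = train_som(np.vstack([quakes, blasts]), grid=(6, 6), epochs=20)
    ```

    After training, `bmu_of(som, x)` classifies a new record by the map region its best-matching unit falls in, mirroring the position-based classification the abstract describes.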

    Incorporating Device Context In Natural Language Understanding

    Automatic speech recognition (ASR) models are used to recognize user commands or queries in products such as smartphones, smart speakers/displays, and other products that enable speech interaction. Automatic speech recognition is a complex problem that requires correct processing of the acoustic and semantic signals from the voice input. Natural language understanding (NLU) systems sometimes fail to correctly interpret utterances that are associated with multiple possible intents. Per the techniques described herein, device context features, such as the identity of the foreground application and other device state information, are utilized to disambiguate the intent of a voice query. Incorporating device context as an input to NLU models improves their ability to correctly interpret utterances with ambiguous intent.
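
    The idea of feeding device context into intent classification can be sketched as follows. This is a hypothetical illustration, not the disclosed system: the intent names, app inventory, toy embedding, and nearest-prototype classifier are all assumptions standing in for a real NLU model.

    ```python
    import numpy as np

    # Hypothetical foreground-app inventory (assumption for this sketch)
    APPS = ["music_app", "video_app"]

    def encode(utterance_vec, foreground_app):
        """Append a one-hot device-context feature (foreground app) to utterance features."""
        ctx = np.zeros(len(APPS))
        ctx[APPS.index(foreground_app)] = 1.0
        return np.concatenate([utterance_vec, ctx])

    def classify(x, prototypes):
        """Nearest-prototype intent classifier over the combined feature vector."""
        names = list(prototypes)
        d = [np.linalg.norm(x - prototypes[n]) for n in names]
        return names[int(np.argmin(d))]

    # The same ambiguous utterance ("play") gets different labels depending on context
    play_vec = np.array([1.0, 0.0])  # stand-in embedding for the word "play"
    prototypes = {
        "play_music": encode(play_vec, "music_app"),
        "play_video": encode(play_vec, "video_app"),
    }
    ```

    With context appended, an utterance that is ambiguous on its own resolves to different intents depending on which app is in the foreground, which is the disambiguation effect the abstract claims.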

    The influence of facial signals on the automatic imitation of hand actions

    Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, little is known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite participants readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate “in the moment” states than to enduring traits. These data support the view that a smile primes multiple forms of imitation, including the copying of actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation.

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Refactoring facial expressions: an automatic analysis of naturally occurring facial expressions in an iterated social dilemma

    Many automatic facial expression recognizers now output individual facial action units (AUs), but several lines of evidence suggest that it is the combination of AUs that is psychologically meaningful: e.g., (a) constraints arising from facial morphology, (b) prior published evidence, (c) claims arising from basic emotion theory. We performed factor analysis on a large dataset and recovered factors that have been discussed in the literature as psychologically meaningful. Further, we show that some of these factors have external validity in that they predict participant behaviors in an iterated prisoner’s dilemma task, and in fact with more precision than the individual AUs. These results both reinforce the validity of automatic recognition (as these factors would be expected from accurate AU detection) and suggest the benefits of using such factors for understanding facial expressions as social signals.
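
    The factor-analysis step can be sketched on synthetic AU intensities. This is an illustration only, not the authors' data or factor structure: the number of frames, the particular AU pairs, and the loading values are assumptions chosen so that two latent factors drive co-occurring AUs.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Synthetic AU intensities: a hypothetical "enjoyment" factor loading on
    # AU6/AU12 and a "brow" factor loading on AU1/AU4 (assumed structure).
    rng = np.random.default_rng(0)
    n = 500
    enjoy = rng.normal(size=n)
    brow = rng.normal(size=n)
    noise = 0.1 * rng.normal(size=(n, 4))
    aus = np.column_stack([
        0.9 * enjoy,   # AU6 (cheek raiser)
        0.8 * enjoy,   # AU12 (lip corner puller)
        0.9 * brow,    # AU1 (inner brow raiser)
        0.7 * brow,    # AU4 (brow lowerer)
    ]) + noise

    fa = FactorAnalysis(n_components=2, random_state=0)
    scores = fa.fit_transform(aus)   # per-frame factor scores
    loadings = fa.components_        # shape (2, 4): how each AU loads on each factor
    ```

    The recovered factor scores, rather than raw AU intensities, would then serve as the lower-dimensional predictors of behavior in the dilemma task.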

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper aims to give a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used, and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?