How to Socialize a Robot: Spatial Organization and Multimodal Semiotic Interactions in a Social Robotics Laboratory
Researchers in social robotics design their robots to function as social agents in interaction with humans and with other robots. While we do not deny that the robot's physical features and its software matter for achieving this goal, we wish to draw attention to the importance of spatial organization and of the processes that coordinate the robot's interactions with humans. We studied these interactions through observations in an ["extended"] social robotics laboratory. In this text we carry out a multimodal interaction analysis of two moments in the practice of social robot designers. We describe the key role played by the roboticists themselves and by a group of young, not yet verbal children who were involved in the robot's design process. We argue that the social character of the designed machine is essentially tied to the subtlety of human conduct in the laboratory. This human involvement in producing the robot's social agency is not a matter of individual will. Rather, the situational dynamics in which the robot is embedded demand that machines and humans fit themselves to one another.
Automated drowsiness detection for improved driving safety
Several approaches have been proposed for the detection and prediction of drowsiness. They can be categorized as estimating fitness for duty, modeling sleep-wake rhythms, measuring vehicle-based performance, and online operator monitoring. Computer-vision-based online operator monitoring has become prominent due to its ability to detect drowsiness predictively. Previous studies with this approach detect driver drowsiness primarily by making prior assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to mine actual human behavior during drowsiness episodes. Automatic classifiers for 30 facial actions from the Facial Action Coding System were developed using machine learning on a separate database of spontaneous expressions. These facial actions include blinking and yawning motions, as well as a number of other facial movements. In addition, head motion was collected through automatic eye tracking and an accelerometer. These measures were passed to learning-based classifiers such as AdaBoost and multinomial ridge regression. The system was able to predict sleep and crash episodes during a driving computer game with 96% accuracy within subjects and above 90% accuracy across subjects. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis revealed new information about human behavior during drowsy driving.
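A minimal sketch of the classification stage the abstract describes, assuming the facial-action and head-motion measurements have already been extracted into a per-episode feature matrix. The feature layout, the synthetic data, and the specific classifier settings are illustrative assumptions, not the authors' actual pipeline; "multinomial ridge regression" is approximated here by L2-regularized logistic regression.

```python
# Sketch only: synthetic stand-in for the drowsiness classification stage.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: 30 facial-action-unit intensities plus 3 head-motion
# channels per episode; labels: 1 = drowsy (sleep/crash) episode, 0 = alert.
X = rng.normal(size=(200, 33))
y = rng.integers(0, 2, size=200)

# AdaBoost over decision stumps (a common default configuration).
ada = AdaBoostClassifier(n_estimators=100, random_state=0)

# Ridge-penalized (L2) logistic regression as a stand-in for
# "multinomial ridge regression".
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)

for name, clf in [("AdaBoost", ada), ("ridge logistic", ridge)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```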
Discrimination of moderate and acute drowsiness based on spontaneous facial expressions
It is important for drowsiness detection systems to identify different levels of drowsiness and respond appropriately at each level. This study explores how to discriminate moderate from acute drowsiness by applying computer vision techniques to the human face. In our previous study, spontaneous facial expressions measured through computer vision techniques were used as an indicator to discriminate alert from acutely drowsy episodes. In this study we explore which facial muscle movements are predictive of moderate and acute drowsiness. The effect of the temporal dynamics of action units on prediction performance is explored by capturing those dynamics with an overcomplete representation of temporal Gabor filters. In the final system we perform feature selection to build a classifier that can discriminate moderately drowsy from acutely drowsy episodes. The system achieves a classification rate of .96 A' in discriminating moderately drowsy versus acutely drowsy episodes. Moreover, the study reveals new information about facial behavior occurring during different stages of drowsiness.
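A minimal sketch of what an overcomplete temporal Gabor representation can look like, applied to a single action unit (AU) intensity time series. The filter lengths, frequencies, and bandwidths below are illustrative choices, not the parameters used in the paper; "overcomplete" here simply means one output sequence per (scale, frequency) pair, so the representation has more channels than the input.

```python
# Sketch only: a 1-D temporal Gabor filter bank over an AU intensity trace.
import numpy as np

def gabor_kernel(length, freq, sigma):
    """1-D complex Gabor: Gaussian window times a complex sinusoid."""
    t = np.arange(length) - (length - 1) / 2.0
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)

def temporal_gabor_features(signal, lengths=(9, 17, 33), freqs=(0.05, 0.1, 0.2)):
    """Convolve one AU time series with Gabor filters at several temporal
    scales and frequencies; return the magnitude responses."""
    feats = []
    for L in lengths:
        for f in freqs:
            k = gabor_kernel(L, f, sigma=L / 6.0)
            feats.append(np.abs(np.convolve(signal, k, mode="same")))
    return np.stack(feats)               # shape: (n_filters, T)

# Hypothetical AU intensity trace over T video frames.
T = 300
au_signal = np.abs(np.sin(np.linspace(0, 8 * np.pi, T))) + 0.1 * np.random.randn(T)
F = temporal_gabor_features(au_signal)
print(F.shape)                           # (9, 300): 9 filter channels x 300 frames
```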
Revealing atomic resolution structural insights into membrane proteins in near-native environments by proton detected solid-state NMR
Discriminately Decreasing Discriminability with Learned Image Filters
In machine learning and computer vision, input images are often filtered to increase data discriminability. In some situations, however, one may wish to purposely decrease the discriminability of one classification task (a "distractor" task) while simultaneously preserving information relevant to another (the task-of-interest): For example, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Another example is inter-dataset generalization: when training on a dataset with a particular covariance structure among multiple attributes, it may be useful to suppress one attribute while preserving another so that a trained classifier does not learn spurious correlations between attributes. In this paper we present an algorithm that finds optimal filters to give high discriminability to one task while simultaneously giving low discriminability to a distractor task. We present results showing the effectiveness of the proposed technique on both simulated data and natural face images.
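One plausible realization of this idea, sketched in PyTorch rather than the paper's own formulation: learn a single convolutional image filter that keeps the task-of-interest separable while pushing a distractor probe toward chance. The adversarial surrogate objective, the trade-off weight `lam`, the filter size, and the synthetic data are all assumptions for illustration.

```python
# Sketch only: an adversarial surrogate for learning a discriminability-
# reducing image filter. Not the paper's exact algorithm or objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic 16x16 grayscale "images" with two binary labels each:
# y_task (to preserve) and y_distr (to suppress, e.g. identity).
N = 256
x = torch.randn(N, 1, 16, 16)
y_task = (x.mean(dim=(1, 2, 3)) > 0).long()                 # toy task-of-interest
y_distr = (x[:, :, :8, :].mean(dim=(1, 2, 3)) > 0).long()   # toy distractor

filt = nn.Conv2d(1, 1, kernel_size=5, padding=2)   # the learned image filter
head_task = nn.Linear(16 * 16, 2)                  # probe for the task-of-interest
head_distr = nn.Linear(16 * 16, 2)                 # probe for the distractor

opt_main = torch.optim.Adam(
    list(filt.parameters()) + list(head_task.parameters()), lr=1e-2)
opt_distr = torch.optim.Adam(head_distr.parameters(), lr=1e-2)

lam = 0.5   # trade-off: preserving the task vs. masking the distractor
for step in range(200):
    z = filt(x).flatten(1)
    # (1) Keep the distractor probe honest: fit it on detached features.
    opt_distr.zero_grad()
    F.cross_entropy(head_distr(z.detach()), y_distr).backward()
    opt_distr.step()
    # (2) Update the filter and task head: minimize task loss while
    # maximizing the distractor probe's loss through the filter.
    opt_main.zero_grad()
    loss_task = F.cross_entropy(head_task(z), y_task)
    loss_distr = F.cross_entropy(head_distr(z), y_distr)
    (loss_task - lam * loss_distr).backward()
    opt_main.step()
```

After training, images passed through `filt` should remain usable for the task-of-interest while the distractor labels become hard to recover, which is the qualitative behavior the abstract describes.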
Cognition and the Statistics of Natural Signals
This paper illustrates how the statistical structure of natural signals may help explain cognitive phenomena. We focus on a regularity found in audiovisual speech perception. Experiments by Massaro and colleagues consistently show that optic and acoustic speech signals have separable influences on perception. From a Bayesian point of view, this regularity reflects a perceptual system that treats optic and acoustic speech as if they were conditionally independent signals. In this paper we perform a statistical analysis of a database of audiovisual speech to check whether optic and acoustic speech signals are indeed conditionally independent. If so, the regularities found by Massaro and colleagues could be seen as an optimal processing strategy of the perceptual system. We analyze a small database of audiovisual speech using hidden Markov models, the most successful models in automatic speech recognition. The results suggest that acoustic and optic speech signals are indeed conditionally independent and that, therefore, the separability found by Massaro and colleagues may be explained in terms of optimal perceptual processing: independent processing of optic and acoustic speech results in no significant loss of information.
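A much simpler surrogate for the paper's HMM-based analysis: under a Gaussian assumption, the mutual information between acoustic and optic features within each speech class can be estimated from covariance log-determinants, and values near zero are consistent with conditional independence. The features, classes, and data below are synthetic placeholders, not the paper's audiovisual database.

```python
# Sketch only: Gaussian estimate of I(acoustic; optic | class).
import numpy as np

def gaussian_mi(A, V):
    """Mutual information (nats) for jointly Gaussian A, V:
    0.5 * (log det Cov_A + log det Cov_V - log det Cov_[A,V])."""
    cov = np.cov(np.hstack([A, V]), rowvar=False)
    dA = A.shape[1]
    sA = np.linalg.slogdet(cov[:dA, :dA])[1]
    sV = np.linalg.slogdet(cov[dA:, dA:])[1]
    sAV = np.linalg.slogdet(cov)[1]
    return 0.5 * (sA + sV - sAV)

rng = np.random.default_rng(0)
classes = rng.integers(0, 5, size=2000)    # e.g. 5 hypothetical phonetic classes
# Acoustic and optic features that depend on the class but, given the class,
# are generated independently -- so the conditional MI should be near zero.
A = classes[:, None] + rng.normal(size=(2000, 3))
V = 2 * classes[:, None] + rng.normal(size=(2000, 2))

# Averaging per-class MI across (roughly equiprobable) classes gives a crude
# estimate of the conditional mutual information.
cmi = np.mean([gaussian_mi(A[classes == c], V[classes == c])
               for c in np.unique(classes)])
print(f"estimated I(A; V | class) = {cmi:.3f} nats")
```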
