
    Midair Gestural Techniques for Translation Tasks in Large-Display Interaction

    Midair gestural interaction has gained considerable attention over the past decades, with numerous attempts to apply midair gestural interfaces to large displays (and TVs), interactive walls, and smart meeting rooms. These attempts, reviewed in numerous studies, used differing gestural techniques for the same action, making them inherently incomparable, which in turn makes it difficult to distill recommendations for the development of midair gestural interaction applications. The aim was therefore to take a closer look at one common action, translation, defined as dragging (or moving) an entity to a predefined target position while retaining the entity's size and rotation. We compared the performance and subjective experiences (participants = 30) of four midair gestural techniques (fist, palm, pinch, and sideways) in the repetitive translation of 2D objects over short and long distances on a large display. The results showed statistically significant differences in movement time and error rate favoring translation by palm over pinch and sideways at both distances. Further, the fist and sideways techniques performed well at short and long distances, respectively. We summarize the implications of the results for the design of midair gestural interfaces, which should be useful for interaction designers and gesture recognition researchers.
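    The study's core comparison rests on per-trial movement times and error rates aggregated by technique and distance. The sketch below shows one way such an analysis could be set up; the column names, example values, and the independent-samples t-test are illustrative assumptions, not the authors' actual pipeline (a repeated-measures design like this one would normally call for paired tests).

```python
# Hypothetical trial log and aggregation for a technique x distance comparison.
import pandas as pd
from scipy import stats

trials = pd.DataFrame({
    "technique": ["palm", "palm", "pinch", "pinch", "fist", "sideways"],
    "distance": ["short", "long", "short", "long", "short", "long"],
    "movement_time_ms": [820, 1130, 1040, 1420, 860, 1210],
    "error": [0, 0, 1, 0, 0, 1],  # 1 = trial ended off the target position
})

# Mean movement time and error rate per technique and distance.
summary = trials.groupby(["technique", "distance"]).agg(
    mean_mt_ms=("movement_time_ms", "mean"),
    error_rate=("error", "mean"),
)
print(summary)

# Illustrative pairwise comparison of movement times (palm vs. pinch).
palm = trials.query("technique == 'palm'")["movement_time_ms"]
pinch = trials.query("technique == 'pinch'")["movement_time_ms"]
print(stats.ttest_ind(palm, pinch))
```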

    Evaluation of Dry Electrodes in Canine Heart Rate Monitoring

    The functionality of three dry electrocardiogram (ECG) electrode constructions was evaluated by measuring canine heart rate during four different behaviors: standing, sitting, lying, and walking. The testing was repeated (n = 9) in each of the 36 scenarios with three dogs. Two of the electrodes were constructed with spring-loaded test pins, while the third was a molded polymer electrode with an Ag/AgCl coating. During the measurements, a specifically designed harness was used to attach the electrodes to the dogs. The performance of the electrodes was evaluated and compared in terms of heartbeat detection coverage, computed from the measured ECG signal with a pattern-matching algorithm that extracts the R-peaks and, from them, the beat-to-beat heart rate. The results show that the overall coverage ratios of the electrodes varied between 45% and 95% across the four activity modes. Coverage was lowest for lying and walking and highest for standing and sitting.
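    A minimal sketch of a coverage computation in this spirit is given below. The paper's pattern-matching R-peak detector is not public, so scipy's generic peak finder stands in for it; the sampling rate, prominence threshold, and heart-rate plausibility band are all assumptions.

```python
# Coverage = fraction of the recording spanned by plausible beat-to-beat intervals.
import numpy as np
from scipy.signal import find_peaks

FS = 250  # assumed ECG sampling rate, Hz

def heart_rate_coverage(ecg: np.ndarray, fs: int = FS) -> float:
    # Stand-in R-peak detection: prominence threshold relative to signal spread,
    # with a 300 ms refractory period (caps detection at ~200 bpm).
    peaks, _ = find_peaks(ecg, prominence=3 * np.std(ecg), distance=int(0.3 * fs))
    if len(peaks) < 2:
        return 0.0
    rr = np.diff(peaks) / fs             # beat-to-beat intervals, seconds
    valid = (rr > 0.3) & (rr < 2.0)      # keep intervals in a 30-200 bpm band
    return rr[valid].sum() / (len(ecg) / fs)

# Crude synthetic test signal: one R-like spike per second on top of noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
ecg = 0.05 * rng.standard_normal(t.size)
ecg[::FS] += 1.0
print(f"coverage: {heart_rate_coverage(ecg):.2f}")
```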

    Dog behaviour classification with movement sensors placed on the harness and the collar

    Dog owners' understanding of the daily behaviour of their dogs may be enhanced by movement measurements that can detect repeatable dog behaviours, such as levels of daily activity and rest as well as changes in them. The aim of this study was to evaluate the performance of supervised machine learning methods that use accelerometer and gyroscope data from wearable movement sensors to classify seven typical dog activities in a semi-controlled test situation. Forty-five medium to large sized dogs participated in the study. Two sensor devices were attached to each dog: one on the back in a harness and one on the neck collar. Altogether 54 features were extracted from the acceleration and gyroscope signals, divided into two-second segments. The performance of four classifiers was compared using features derived from both sensor modalities and from the acceleration data only. The results were promising: the movement sensor on the back yielded up to 91% accuracy in classifying the dog activities, and the sensor on the collar yielded 75% accuracy at best. Including the gyroscope features improved the classification accuracy by 0.7-2.6%, depending on the classifier and the sensor location. The most distinct activity was sniffing, whereas the static postures (lying on chest, sitting, and standing) were the most challenging behaviours to classify, especially from the data of the neck collar sensor. The data used in this article, as well as the signal processing scripts, are openly available in Mendeley Data, https://doi.org/10.17632/vxhx934tbn.1.
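    The pipeline described here (two-second windows, per-window features, classifier comparison) can be sketched as follows. This is an assumed simplification: the sampling rate, the three statistics per axis, and the two sklearn classifiers are stand-ins for the study's 54 features and four classifiers, which are documented in the paper and the open Mendeley dataset.

```python
# Window-based feature extraction and classifier comparison on 6-axis IMU data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

FS = 100          # assumed sampling rate, Hz
WIN = 2 * FS      # two-second segments, as in the study

def window_features(imu: np.ndarray) -> np.ndarray:
    """imu: (n_samples, 6) accel xyz + gyro xyz -> one feature row per window."""
    n_win = imu.shape[0] // WIN
    segs = imu[: n_win * WIN].reshape(n_win, WIN, imu.shape[1])
    # Simple per-axis statistics; the original work extracted 54 features.
    return np.concatenate(
        [segs.mean(axis=1), segs.std(axis=1), np.abs(segs).max(axis=1)], axis=1
    )

# Synthetic stand-in data: two fake activity classes with different intensity.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(0, s, size=(6000, 6))) for s in (1.0, 2.0)])
y = np.repeat([0, 1], X.shape[0] // 2)

for clf in (RandomForestClassifier(n_estimators=100), SVC()):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean().round(3))
```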

    Mid-Air Gestural Interaction with a Large Fogscreen

    Projected walk-through fogscreens have been built, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects on a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts' law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback performs well and shows potential for fogscreen interaction, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces suitable for fogscreens.
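    The quoted throughput figure comes from a Fitts' law analysis. Below is a worked sketch using the common effective-width formulation (Soukoreff and MacKenzie); the paper's exact computation may differ, and the distances, endpoint spread, and movement times shown are hypothetical.

```python
# Effective Fitts' law throughput: TP = ID_e / MT, with the effective width
# W_e = 4.133 * SD of selection endpoints along the task axis.
import numpy as np

def throughput_bps(distance: float,
                   endpoints: np.ndarray,
                   movement_times_s: np.ndarray) -> float:
    w_e = 4.133 * np.std(endpoints, ddof=1)   # effective width from endpoint spread
    id_e = np.log2(distance / w_e + 1)        # effective index of difficulty, bits
    return id_e / movement_times_s.mean()     # bits per second

# Hypothetical condition: 60 cm reaches, ~2 cm endpoint spread, ~1.2 s trials.
rng = np.random.default_rng(1)
tp = throughput_bps(60.0, rng.normal(60, 2, 40), rng.normal(1.2, 0.15, 40))
print(f"{tp:.2f} bps")  # lands in the same ballpark as the reported 1.4-2.6 bps
```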

    Automatic Detection of Face and Facial Features from Images of Neutral and Expressive Faces

    Most natural human interaction takes place through face-to-face communication. The obvious importance of facial stimuli for humans naturally motivates the idea of also utilizing facial information in and for human-technology interaction, in which the user both interacts with and is actively observed by computational devices embedded in the environment. In computer vision research, automatic face analysis has become a very active area. Important research problems include, for example, face detection, detection of important facial features, and facial expression recognition from static and real-time images. The aim of the present thesis was to investigate one aspect of automatic face analysis, namely the detection of the face and facial features from static and real-time facial displays irrespective of socially and emotionally communicative changes in those displays. During the course of this research, a framework was developed for automatic and expression-invariant localization of faces and prominent facial landmarks, such as the eyes, eyebrows, nose, and mouth, from static images and real-time videos. The performance of the framework was evaluated on several databases of facial expressions coded in terms of prototypical facial displays, such as happiness and surprise, and facial muscle activations presented alone or in combination. In general, the results showed that the framework located the face and facial landmarks automatically, robustly, and efficiently in static images and streaming videos displaying facial expressions of varying complexity.
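    For illustration only, the sketch below localizes a face and the eyes in a live video stream with off-the-shelf OpenCV Haar cascades. This is a generic stand-in for the task the thesis addresses, not the expression-invariant framework it developed, and it assumes a webcam is available at device index 0.

```python
# Face and eye localization on a webcam stream using OpenCV's bundled cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def annotate(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face box
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 1)
    return frame

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("faces", annotate(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```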
