560 research outputs found

    Determination and evaluation of clinically efficient stopping criteria for the multiple auditory steady-state response technique

    Background: Although the auditory steady-state response (ASSR) technique uses objective statistical detection algorithms to estimate behavioural hearing thresholds, the audiologist still has to decide when to terminate ASSR recordings, which reintroduces a degree of subjectivity. Aims: The present study aimed to establish clinically efficient stopping criteria for a multiple 80-Hz ASSR system. Methods: In Experiment 1, data from 31 normal-hearing subjects were analysed off-line to propose stopping rules. Accordingly, ASSR recordings were stopped when (1) all 8 responses reached significance and remained significant for 8 consecutive sweeps; (2) the mean noise level was ≤ 4 nV (if, at this “≤ 4-nV” criterion, p-values were between 0.05 and 0.1, the measurement was extended once by 8 sweeps); or (3) a maximum of 48 sweeps was reached. In Experiment 2, these stopping criteria were applied to 10 normal-hearing and 10 hearing-impaired adults to assess their efficiency. Results: Applying these stopping rules yielded ASSR thresholds comparable to other multiple-ASSR research with normal-hearing and hearing-impaired adults. Furthermore, in 80% of cases, ASSR thresholds could be obtained within a time-frame of 1 hour. Examining the significant response amplitudes of the hearing-impaired adults through cumulative curves indicated that a noise-stop criterion higher than “≤ 4 nV” can probably be used. Conclusions: The proposed stopping rules can be used in adults to determine accurate ASSR thresholds within an acceptable time-frame of about 1 hour. However, additional research with infants and with adults with varying degrees and configurations of hearing loss is needed to optimize these criteria.
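    The three stopping rules described above can be sketched as a single decision function. This is a minimal illustration, not the study's implementation: the function name, argument shapes, and the source of the per-response p-values are assumptions.

```python
def should_stop(p_values, noise_nv, consecutive_sig, sweeps_done,
                extended_once):
    """Decide whether to stop a multiple 80-Hz ASSR recording.

    p_values        -- current p-values for the 8 responses
    noise_nv        -- mean residual noise level in nanovolts
    consecutive_sig -- sweeps for which all 8 responses stayed significant
    sweeps_done     -- sweeps recorded so far
    extended_once   -- True if the one-time 8-sweep extension was used
    """
    # Rule 1: all 8 responses significant for 8 consecutive sweeps.
    if all(p < 0.05 for p in p_values) and consecutive_sig >= 8:
        return True
    # Rule 2: mean noise <= 4 nV; borderline p-values (0.05-0.1)
    # permit a single 8-sweep extension before stopping.
    if noise_nv <= 4.0:
        borderline = any(0.05 <= p < 0.1 for p in p_values)
        if borderline and not extended_once:
            return False  # extend once by 8 sweeps
        return True
    # Rule 3: hard ceiling of 48 sweeps.
    return sweeps_done >= 48
```

    In this sketch, rule 2 stops the recording once the noise floor is low enough to deem absent responses reliably absent, unless a borderline p-value justifies the one-time extension.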

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairment, and chronic disease have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address these problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection, and emergency notification. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and recognises human ADLs at a multi-granular action level: coarse- and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects, and ADLs are deduced, whereas at the fine-grained level, object usage reaching a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle the imprecise/vague interpretation of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties introduced into HAR by factors such as technological failure, object malfunction, and human error.
Hence, existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable real-time system architecture for a smart environment tailored to AAL, and proposes a microservices architecture with both off-the-shelf and bespoke sensor-based sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively. However, the average classification time per sensor event suffered, at 3971 ms and 62183 ms for the single- and mixed-activity scenarios. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements using a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by integrating the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study presented a case study extending single-user AR to multi-user AR by combining RFID tags and fingerprint sensors as discriminative sensors to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system.
A future research direction, adopting fog/edge computing paradigms from cloud computing for higher availability, reduced network traffic/energy, lower cost, and a decentralised system, is discussed. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise facts, fuzziness, and uncertainties in the environment/ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser based client interfaces for retrieving information such as live sensor events and HAR results.
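The fine-grained verification step described above, confirming an intended action only when fused multimodal evidence reaches a satisfactory threshold, can be sketched as follows. The modality names, weights, and the 0.7 threshold are illustrative assumptions, not values from the thesis.

```python
def fuse_evidence(readings, weights):
    """Weighted fusion of per-modality confidence scores in [0, 1]."""
    total = sum(weights.values())
    return sum(readings[m] * w for m, w in weights.items()) / total

def verify_action(readings, weights, threshold=0.7):
    """Confirm an intended fine-grained action if fused evidence suffices."""
    return fuse_evidence(readings, weights) >= threshold

# Hypothetical evidence for a "pick up cup" action from three modalities.
readings = {"contact": 0.9, "pressure": 0.8, "accel": 0.6}
weights = {"contact": 0.5, "pressure": 0.3, "accel": 0.2}
```

Here a contact sensor is weighted most heavily; in the thesis the fusion is handled with fuzzy rules rather than a fixed weighted sum, so this is only a simplified stand-in for the idea.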

    The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression

    Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expressions of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in the neurosciences in order to provide coherence to extant and future research on this topic. The roles of several of the brain's reward systems, and of the amygdala, somatosensory cortices, and motor centers, are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding interpretation of relevant findings from the neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expressions of emotion. Peer Reviewed

    The proximate mechanisms and ultimate functions of smiles

    Niedenthal et al.'s classification of smiles erroneously conflates psychological mechanisms and adaptive functions. This confusion weakens the rationale behind the types of smiles they chose to individuate, and it obfuscates the distinction between the communicative versus denotative nature of smiles and the role of perceived gaze direction in emotion recognition.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.

    Motivational aspects of recognizing a smile

    What are the underlying processes that enable human beings to recognize a happy face? Clearly, featural and configural cues will help to identify the distinctive smile. In addition, the motivational state of the observer will influence the interpretation of emotional expressions. Therefore, a model accounting for emotion recognition is only complete if bottom-up and top-down aspects are integrated.

    How does perceiving eye direction modulate emotion recognition?

    Niedenthal et al. postulate that eye contact with the expresser of an emotion automatically initiates embodied simulation. Our commentary explores the generality of such an eye contact effect for emotions other than happiness. Based on the appraisal theory of emotion, we propose that embodied simulation may be reinforced by mutual or averted gaze as a function of emotional context.

    Face modeling for face recognition in the wild.

    Face understanding is considered one of the most important topics in the computer vision field, since the face is a rich source of information in social interaction. Not only does the face provide information about the identity of people, but also about their membership in broad demographic categories (including sex, race, and age) and about their current emotional state. Facial landmark extraction is the cornerstone of the success of different facial analysis and understanding applications. In this dissertation, a novel facial model is designed for facial landmark detection in unconstrained real-life environments from different image modalities, including infra-red and visible images. In the proposed facial landmark detector, a part-based model is incorporated with holistic face information. In the part-based model, the face is modeled by the appearance of different face parts (e.g., right eye, left eye, left eyebrow, nose, mouth) and their geometric relations. The appearance is described by a novel feature referred to as the pixel difference feature. This representation is three times faster than the state of the art in feature representation. To model the geometric relations between the face parts, the complex Bingham distribution is adapted from the statistics community into computer vision. The global information is incorporated with the local part model using a regression model. The model outperforms the state of the art in detecting facial landmarks. The proposed facial landmark detector is tested in two computer vision problems: boosting the performance of face detectors by rejecting pseudo-faces, and camera steering in a multi-camera network.
To highlight the applicability of the proposed model to different image modalities, it has been studied in two face understanding applications: face recognition from visible images, and physiological measurement for autistic individuals from thermal images. Recognizing identities from faces under different poses, expressions, and lighting conditions against a complex background is a still unsolved problem, even with accurate landmark detection. Therefore, a learned similarity measure is proposed. The proposed measure responds only to differences in identity and filters out illumination and pose variations; it makes use of statistical inference in the image plane. Additionally, the pose challenge is tackled by two new approaches: assigning different weights to different face parts based on their visibility in the image plane at different pose angles, and synthesizing virtual facial images for each subject at different poses from a single frontal image. The proposed framework is demonstrated to be competitive with top-performing state-of-the-art methods, evaluated on standard benchmarks for face recognition in the wild. The other face understanding application is physiological measurement for autistic individuals from infra-red images. In this framework, accurately detecting and tracking the Superficial Temporal Arteria (STA) while the subject is moving, playing, and interacting in social communication is a must. Detecting and tracking the STA is very challenging, since the appearance of the STA region changes over time and is not discriminative enough from other areas of the face region. A novel detection concept, called supporter collaboration, is introduced: the STA is detected and tracked with the help of face landmarks and geometric constraints. This research advances the field of emotion recognition.
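The pixel difference feature named above is characterized only as a cheap appearance descriptor; a minimal sketch of the general idea, signed intensity differences between sampled pixel pairs, might look like the following. The pair-sampling scheme and function names are illustrative assumptions, not the dissertation's exact formulation.

```python
import random

def pixel_difference_feature(patch, pairs):
    """Describe a grayscale patch by signed differences of pixel pairs.

    patch -- 2D list (rows of intensities)
    pairs -- list of ((r1, c1), (r2, c2)) pixel-coordinate pairs
    """
    return [patch[r1][c1] - patch[r2][c2] for (r1, c1), (r2, c2) in pairs]

def sample_pairs(height, width, n, seed=0):
    """Randomly sample n pixel-coordinate pairs inside an h-by-w patch."""
    rng = random.Random(seed)
    return [((rng.randrange(height), rng.randrange(width)),
             (rng.randrange(height), rng.randrange(width)))
            for _ in range(n)]
```

The descriptor's speed comes from the fact that each feature dimension costs one subtraction, with no filtering or gradient computation per pixel.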

    Resource for a brief early somatic intervention to reduce symptoms of post-traumatic stress for victims of violent crime in acute hospital settings in Southeast Los Angeles

    The goal of this dissertation was to create an intervention resource guide containing recommendations that can be utilized by the South Los Angeles Trauma Recovery Center (SLATRC) to implement a brief early somatic intervention for victims of violent crime in acute hospital settings in Southeast Los Angeles. It was decided that an intervention based upon the Trauma Resiliency Model/Community Resiliency Model (TRM/CRM) would best fit the needs of the population. First, a systematic literature review was conducted to increase the efficacy of the resource by gathering information on similar interventions. The specific questions guiding this review were as follows: (1) What are the components of a successful somatic intervention for the treatment of PTSD? (2) What therapeutic/client/systemic factors contribute to a somatic intervention’s effectiveness in treating trauma? (3) What factors contribute to reduced effectiveness? (4) What are the potential benefits and limitations of somatic interventions for treating trauma in a hospital-based setting? Literature to guide resource creation was systematically searched for using a strategy informed by the Joanna Briggs Institute (JBI) approach, an evidence-based systematic review process for healthcare research. After reviewing available review protocols, the reviewer selected the elements of the JBI approach that best suited the needs of this project. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was selected as the review protocol, and a PRISMA checklist guided the systematic literature review process. Data from eligible and relevant studies were extracted and stored in a data table, with emphasis on data that could be used to answer this project’s research questions. Results from the systematic literature review guided the creation of the proposed resource.
The recommendations are intended to be utilized by professional and paraprofessional staff at the SLATRC and will be distributed in the form of a training manual. After the resource creation stage, Gabriela Ochoa, LMFT, the Coordinator of SLATRC, reviewed the proposed resource and provided feedback to the author. Elaine Miller-Karas, LCSW, an expert in somatic interventions for trauma and the executive director and co-founder of the Trauma Resource Institute, served as the second external reviewer. External reviewers provided feedback via a review form that asked them to rate, on a 1-5 scale: how user-friendly the resource is; whether the instructions are clear and easy to follow; how viable and sustainable the manual is; and how consistent it is with current policies at the St. Francis trauma unit or current knowledge in the field of somatic interventions for trauma. In addition, they were asked to give their opinion on the manual’s strengths, any potential barriers to implementation, whether the philosophy and approach of the resource are consistent with the aims of SLATRC, and whether the interventions are culturally sensitive and relevant to the target population. Feedback from the reviewers was incorporated into the resource.