
    Language use in depressed and non-depressed mothers and their adolescent offspring

    Background: Approximately 10% of mothers experience depression each year, which increases the risk of depression in their offspring. To date, no research has analysed the linguistic features of depressed mothers and their adolescent offspring during dyadic interactions. We examined the extent to which linguistic features of mothers' and adolescents' speech during dyadic interactional tasks could discriminate depressed from non-depressed mothers.

    Methods: Computer-assisted linguistic analysis (Linguistic Inquiry and Word Count; LIWC) was applied to transcripts of low-income mother-adolescent dyads (N = 151) performing a lab-based problem-solving interaction task. One-way multivariate analyses were conducted to determine which linguistic features hypothesized to relate to maternal depressive status differed significantly in frequency between depressed and non-depressed mothers and between higher- and lower-risk offspring. Logistic regression analyses were performed to classify dyads into the two groups.

    Results: Linguistic features in mothers' and their adolescent offspring's speech during problem-solving interactions discriminated between maternal depression statuses. Many, but not all, effects were consistent with those identified in previous research using primarily written text, highlighting the validity and reliability of language behaviour associated with depressive symptomatology across lab-based and natural environmental contexts.

    Limitations: Our analyses do not enable us to ascertain how mothers' language behaviour may have influenced their offspring's communication patterns. We also cannot say how or whether these findings generalize to other contexts or populations.

    Conclusion: The findings extend the existing literature on linguistic features of depression by indicating that maternal depression is associated with linguistic behaviour during mother-adolescent interaction.
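    A minimal sketch of the classification step described in the Methods: logistic regression over LIWC category frequencies, with cross-validation. The file path, column names, and label column are illustrative assumptions, not taken from the paper.

```python
# Sketch: classify maternal depression status from LIWC features
# using scikit-learn logistic regression. All names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical export: one row per dyad, LIWC category frequencies
# for mother and adolescent speech, plus a binary depression label.
df = pd.read_csv("liwc_dyads.csv")
feature_cols = ["mother_i", "mother_negemo", "mother_sad",
                "teen_i", "teen_negemo", "teen_sad"]
X = df[feature_cols]
y = df["mother_depressed"]  # 1 = depressed, 0 = non-depressed

# Standardize features, then fit the logistic regression classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated classification accuracy.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.2f}")
```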

    Pedestrian Detection in Low Quality Moving Camera Videos

    Pedestrian detection is one of the most researched areas in computer vision and is rapidly gaining importance with the emergence of autonomous vehicles and steering-assistance technology. Much work has been done in this field, ranging from the collection of extensive datasets to the benchmarking of new technologies, but this research depends on high-quality hardware such as high-resolution cameras, Light Detection and Ranging (LIDAR), and radar. For detection in low-quality moving-camera videos, we use image deblurring techniques to reconstruct image frames, apply existing pedestrian detection algorithms, and compare our results with the leading research in this area.
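    A minimal sketch of the pipeline the abstract describes: deblur each frame, then run an off-the-shelf pedestrian detector. The abstract does not name a specific deblurring method, so unsharp masking stands in for it here, and OpenCV's HOG + linear-SVM people detector stands in for "existing pedestrian detection algorithms"; the video path is illustrative.

```python
# Sketch: frame-by-frame deblurring followed by pedestrian detection.
import cv2

# Off-the-shelf HOG pedestrian detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam_low_quality.mp4")  # hypothetical input
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Simple deblurring stand-in: unsharp masking via a Gaussian blur.
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

    # Detect pedestrians in the reconstructed frame and draw boxes.
    boxes, _ = hog.detectMultiScale(sharpened, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(sharpened, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detections", sharpened)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```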

    Analysis of Contextual Emotions Using Multimodal Data

    Affective computing builds and evaluates systems that can recognize, interpret, and simulate human emotion. It is an interdisciplinary field spanning computer science, psychology, and many other disciplines. Human emotion has been studied in psychology for years but has only recently become a prominent topic in computer science. The field of affective computing has largely focused on analyzing static facial expressions to recognize human emotions, without taking bias (e.g., gender or data bias), context, or temporal information into account. Psychology has shown the difficulty of analyzing emotions without incorporating this type of information. In this dissertation, we propose new approaches to recognizing emotions that incorporate both contextual and temporal information, as well as approaches to mitigate data bias. More specifically, this dissertation makes the following theoretical and application-based contributions: (1) the first work to recognize context using temporal dynamics from facial action units; (2) recognition of multiple self-reported emotions from facial expression-based videos; (3) a new approach to mitigating data bias in facial action units; and (4) multimodal, temporal fusion of physiological signals and action units for emotion recognition. This dissertation has a wide range of applications in fields including, but not limited to, medicine, security, and entertainment.
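    A minimal sketch of contribution (4), multimodal temporal fusion: per-window facial action unit (AU) features are concatenated with physiological-signal features and fed to a classifier. The array shapes, synthetic data, and random-forest classifier are illustrative assumptions, not the dissertation's actual architecture.

```python
# Sketch: feature-level (early) fusion of AU and physiological
# features for emotion classification. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 200 temporal windows, 17 AU intensity means per
# window, 4 physiological features (e.g., heart rate, skin conductance).
au_feats = rng.random((200, 17))
phys_feats = rng.random((200, 4))
labels = rng.integers(0, 5, size=200)  # 5 emotion classes

# Early fusion: concatenate the two modalities per window.
X = np.hstack([au_feats, phys_feats])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```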