350 research outputs found
Daily-, estral- and age-dependent regulation of RFRP-3 neurons and their role in luteinizing hormone secretion in female mice
Female reproductive success relies on proper integration of circadian and ovarian signals to the hypothalamic-pituitary-gonadal axis in order to synchronize the pre-ovulatory LH surge with the onset of the main active period. In this study, we assessed in female mice whether hypothalamic neurons expressing the gonadotropin-inhibitory peptide RFRP-3 exhibit daily-, estrous stage-, and age-dependent variations. Furthermore, we investigated whether arginine vasopressin (AVP) and vasoactive intestinal peptide (VIP), two circadian peptides produced by the suprachiasmatic nucleus, regulate RFRP-3 neuronal activity. In young mice, the number of activated (c-Fos-positive) RFRP-3 neurons was reduced at the day-to-night transition, with no difference between diestrus and proestrus. In contrast, RFRP neuron firing rate was higher in proestrus than in diestrus, independently of the time of day. AVP fibers contacted RFRP neurons, with the highest density observed during the late afternoon of diestrus and proestrus, while AVP application increased RFRP neuron firing in the afternoon of diestrus, but not of proestrus. By contrast, we found no daily variation and no effect of VIP on RFRP neurons. Moreover, we demonstrated an age-dependent decrease in the total and activated numbers of RFRP-3 neurons. Finally, we report that the daily variations in the number of activated RFRP-3 neurons and in the number of RFRP-3 neurons with close AVP and VIP fiber appositions were abolished in old mice. In conclusion, RFRP neurons integrate both daily and estrogenic signals in young, but not in old, mice, a phenomenon which could be implicated in the aging of the reproductive axis
Congenital Adrenal Hyperplasia: A Case Report with Premature Teeth Exfoliation and Bone Resorption
Congenital adrenal hyperplasia (CAH) is an inherited autosomal recessive disorder characterized by insufficient production of cortisol. The aim of this case report was to present a child with CAH, premature exfoliation of primary teeth, and accelerated eruption of his permanent teeth related to bone resorption. A 4.5-year-old Caucasian boy with CAH and long-term administration of glucocorticoids was referred for dental restoration. Clinical examination revealed primary molars with worn stainless steel crowns, severe attrition of the upper canines, and absence of the upper incisors. Before the completion of treatment, abnormal mobility of the first upper primary molars and the lower incisors was detected, and a few days later the teeth exfoliated prematurely. Histologic examination revealed normal tooth structure. Alkaline phosphatase and blood cell values were normal. Eruption of the permanent dentition was also accelerated. Tooth mobility was noticed in the permanent teeth as soon as they erupted, along with bone destruction. Examination revealed an elevated level of receptor activator of nuclear factor-kB ligand and lower-than-normal osteoprotegerin and vitamin D levels. The patient was treated with vitamin D supplements, and his teeth have been stable ever since. CAH is a serious chronic disorder that, in children, may present with accelerated dental development and possibly premature loss of primary teeth
Exploring the Suitability of Semantic Spaces as Word Association Models for the Extraction of Semantic Relationships
Given the recent advances and progress in Natural Language Processing (NLP), extraction of semantic relationships has been at the top of the research agenda in the last few years. This work has been mainly motivated by the fact that building knowledge graphs (KG) and knowledge bases (KB), as a key ingredient of intelligent applications, is a never-ending challenge, since new knowledge needs to be harvested while old knowledge needs to be revised. Currently, approaches to relation extraction from text are dominated by neural models applying some form of distant (weak) supervision to learn from large corpora, with or without consulting external knowledge sources. In this paper, we empirically study and explore the potential of a novel idea: using classical semantic spaces and models, e.g., word embeddings generated for extracting word associations, in conjunction with relation extraction approaches. The goal is to use these word association models to reinforce current relation extraction approaches. We believe this is a first attempt of its kind, and the results of the study should shed some light on the extent to which these word association models can be used, as well as on the most promising types of relationships to consider for extraction
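As a rough illustration of how a word association model could reinforce relation extraction, the sketch below scores candidate entity pairs by their embedding similarity and keeps only strongly associated pairs. The pretrained model name, the threshold, and the candidate triples are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: scoring candidate relation pairs with a word-association
# model (a small pretrained GloVe space loaded via gensim). Model name,
# threshold and candidate triples are illustrative assumptions.
import gensim.downloader as api

def association_score(model, head, tail):
    """Cosine similarity between the two entity words, or 0.0 if either is out of vocabulary."""
    if head in model and tail in model:
        return float(model.similarity(head, tail))
    return 0.0

def reinforce(candidates, model, threshold=0.3):
    """Keep only relation candidates whose arguments are strongly associated."""
    return [(h, r, t) for (h, r, t) in candidates
            if association_score(model, h, t) >= threshold]

if __name__ == "__main__":
    glove = api.load("glove-wiki-gigaword-50")   # small pretrained semantic space
    candidates = [("paris", "capital_of", "france"),
                  ("paris", "capital_of", "banana")]
    print(reinforce(candidates, glove))
```

A threshold-based filter is only one way to combine the two signals; the association score could equally be fed to the relation classifier as an extra feature.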
Assessing the Effectiveness of Automated Emotion Recognition in Adults and Children for Clinical Investigation
Recent success stories in automated object or face recognition, partly fuelled by deep learning artificial neural network (ANN) architectures, have led to the advancement of biometric research platforms and, to some extent, the resurrection of Artificial Intelligence (AI). In line with this general trend, inter-disciplinary approaches have been taken to automate the recognition of emotions in adults or children for the benefit of various applications, such as identification of children's emotions prior to a clinical investigation. Within this context, it turns out that automating emotion recognition is far from straightforward, with several challenges arising for both science (e.g., methodology underpinned by psychology) and technology (e.g., the iMotions biometric research platform). In this paper, we present a methodology, experiment and interesting findings, which raise the following research questions for the recognition of emotions and attention in humans: a) the adequacy of well-established techniques such as the International Affective Picture System (IAPS), b) the adequacy of state-of-the-art biometric research platforms, c) the extent to which emotional responses may differ between children and adults. Our findings and first attempts to answer some of these research questions are based on a mixed sample of adults and children who took part in the experiment, resulting in a statistical analysis of numerous variables. These relate to participants' responses to a sample of IAPS pictures, captured both automatically and interactively
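For orientation only, the snippet below shows one way a comparison of adult versus child responses to IAPS pictures might be tested statistically; the response variable, the scores, and the choice of a Mann-Whitney U test are hypothetical and not taken from the study.

```python
# Illustrative sketch only: comparing one captured response variable
# (e.g. a per-picture valence score) between adult and child participants
# with a non-parametric test. All data below are hypothetical placeholders.
from scipy.stats import mannwhitneyu

adult_scores = [0.62, 0.71, 0.55, 0.68, 0.74]   # hypothetical per-picture responses
child_scores = [0.41, 0.52, 0.47, 0.60, 0.39]

stat, p = mannwhitneyu(adult_scores, child_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # a small p-value would suggest the groups differ
```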
Feature Extraction Techniques for Human Emotion Identification from Face Images
Emotion recognition has been a challenging problem over the years due to irregularities in model complexity and unpredictability between expression categories. Many emotion detection algorithms have been developed in the last two decades, yet they still face problems with accuracy, complexity and real-world implementation. In this paper, we propose two feature extraction techniques: a mouth-region-based method and the Maximally Stable Extremal Regions (MSER) method. In the mouth-region-based method, the mouth area is calculated and emotions are classified based on that value. In the MSER method, features are extracted using connected components and then given to a simple ANN for classification. Experimental results show that the mouth-area-based feature extraction method gives 86% accuracy, while the MSER-based method outperforms it, achieving 89% accuracy on DEAP. Thus, it can be concluded that the proposed methods can be effectively used for emotion detection
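As a hedged sketch of the MSER step, the code below detects extremal regions with OpenCV and summarises them into a fixed-length feature vector that could feed a simple ANN. The image path, feature-vector length, and normalisation are placeholder assumptions rather than the paper's exact pipeline.

```python
# Minimal MSER-style feature extraction sketch, assuming OpenCV and a
# grayscale face image on disk ("face.png" is a placeholder path).
# The downstream ANN classifier is not shown.
import cv2
import numpy as np

def mser_features(gray, n_features=32):
    """Detect MSER regions and summarise their areas as a fixed-length feature vector."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    areas = sorted((len(r) for r in regions), reverse=True)[:n_features]
    feats = np.zeros(n_features, dtype=np.float32)
    feats[:len(areas)] = areas
    return feats / (gray.shape[0] * gray.shape[1])   # normalise by image area

if __name__ == "__main__":
    img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # placeholder input image
    print(mser_features(img)[:5])
```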
Iris Image Recognition using Optimized Kohonen Self Organizing Neural Network
The pursuit of an effective people management system has widened over the years to cope with the enormous increase in population. Any such system includes identification, verification and recognition stages. Iris recognition has become a notable biometric for supporting these systems due to its versatility and non-invasive approach. Such systems identify individuals using the texture information distributed across the iris region. Many classification algorithms are available for iris recognition, but they are often sophisticated and computationally heavy. In this paper, an improved Kohonen self-organizing neural network (KSONN) is used to boost the performance of the existing KSONN. This improvement comes from introducing an optimization technique into the learning phase of the KSONN. The proposed method shows improved recognition accuracy and also reduces the number of iterations required to train the network. From the experimental results, it is observed that the proposed method achieves a maximum accuracy of 98% in 85 iterations
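For context, a bare-bones Kohonen SOM training loop is sketched below in NumPy; the paper's specific optimization of the learning phase is not reproduced here, and the map size, decay schedules, and toy iris-code vectors are arbitrary assumptions.

```python
# A plain Kohonen self-organizing map update loop, for orientation only.
# Map size, learning-rate/neighbourhood schedules and data are placeholders.
import numpy as np

def train_som(data, grid=(10, 10), epochs=85, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    ys, xs = np.indices(grid)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood radius
        for x in data:
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), grid)   # best matching unit
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)    # pull neighbourhood toward x
    return weights

# Toy usage with random placeholder "iris code" vectors:
codes = np.random.default_rng(1).random((50, 64))
som = train_som(codes)
```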
A Multi-modal Machine Learning Approach and Toolkit to Automate Recognition of Early Stages of Dementia among British Sign Language Users
The ageing population trend is correlated with an increased prevalence of acquired cognitive impairments such as dementia. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support and appropriate medication. Researchers are working urgently to develop effective technological tools that can help doctors undertake early identification of cognitive disorders. In particular, screening for dementia in ageing Deaf signers of British Sign Language (BSL) poses additional challenges, as the diagnostic process is bound up with conditions such as the quality and availability of interpreters, as well as appropriate questionnaires and cognitive tests. On the other hand, deep learning based approaches for image and video analysis and understanding are promising, particularly the adoption of Convolutional Neural Networks (CNNs), which require large amounts of training data. In this paper, however, we demonstrate novelty in the following ways: a) a multi-modal machine learning based automatic recognition toolkit for early stages of dementia among BSL users, in which features from several parts of the body contributing to the sign envelope, e.g., hand-arm movements and facial expressions, are combined; b) universality, in that our technique can be applied to users of any sign language, since it is language independent; c) given the trade-off between complexity and accuracy of machine learning (ML) prediction models, as well as the limited amount of training and testing data available, we show that our approach is not over-fitted and has the potential to scale up
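To illustrate the multi-modal idea at a high level, the sketch below concatenates per-video hand-trajectory and facial-expression feature vectors before classification. The feature dimensions, the logistic-regression classifier, and the toy data are assumptions and not the toolkit's actual models.

```python
# Hedged sketch of early fusion of two modalities into one classifier input.
# Dimensions, labels and the classifier are placeholder assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(hand_feats, face_feats):
    """Early fusion: one feature vector per video drawn from both modalities."""
    return np.concatenate([hand_feats, face_feats], axis=1)

rng = np.random.default_rng(0)
hand = rng.random((40, 16))      # e.g. summary statistics of the sign space envelope
face = rng.random((40, 8))       # e.g. facial landmark motion statistics
labels = rng.integers(0, 2, 40)  # toy labels: 0 = healthy signer, 1 = early-stage dementia

clf = LogisticRegression(max_iter=1000).fit(fuse(hand, face), labels)
print(clf.score(fuse(hand, face), labels))
```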
Machine Learning for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language
Real-time hand movement trajectory tracking based on machine learning approaches may assist the early identification of dementia in ageing deaf individuals who are users of British Sign Language (BSL), since there are few clinicians with appropriate communication skills and a shortage of sign language interpreters. In this paper, we introduce an automatic dementia screening system for ageing Deaf signers of BSL, using a Convolutional Neural Network (CNN) to analyse the sign space envelope and facial expression of BSL signers recorded in normal 2D videos from the BSL corpus. Our approach introduces a sub-network (the multi-modal feature extractor) which includes an accurate real-time hand trajectory tracking model and a real-time landmark facial motion analysis model. The experiments show the effectiveness of our deep learning based approach in terms of sign space tracking, facial motion tracking and early-stage dementia assessment tasks
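As a simplified illustration of what a sign space envelope summary might look like once a hand trajectory has been tracked, the snippet below computes basic spatial statistics from 2D wrist positions. The chosen statistics and the toy trajectory are assumptions, not the system's actual CNN-derived features.

```python
# Illustrative only: summarising a tracked 2D wrist trajectory
# (frame-by-frame (x, y) positions) into simple envelope statistics.
import numpy as np

def envelope_stats(trajectory):
    """Bounding box, spread and path length of the hand trajectory over a clip."""
    xy = np.asarray(trajectory, dtype=float)        # shape (frames, 2)
    width, height = xy.max(axis=0) - xy.min(axis=0)
    spread = xy.std(axis=0).mean()                  # average positional spread
    path_len = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    return {"width": width, "height": height,
            "spread": spread, "path_length": path_len}

print(envelope_stats([(100, 200), (120, 180), (150, 210), (130, 240)]))
```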
- …