
    Prediction of remission in first-episode schizophrenia by duration mismatch negativity

    University of Toyama · doctoral dissertation 富生命博甲第150号 · 中島 英 (Nakajima S) · published 2023/03/23. Published paper: Nakajima S, Higuchi Y, Tateno T, Sasabayashi D, Mizukami Y, Nishiyama S, Takahashi T, Suzuki M. Duration Mismatch Negativity Predicts Remission in First-Episode Schizophrenia Patients. Front Psychiatry. 2021 Nov 25;12:777378. doi: 10.3389/fpsyt.2021.777378. PMID: 34899430; PMCID: PMC8656455.

    An investigation of entorhinal spatial representations in self-localisation behaviours

    Spatial-modulated cells of the medial entorhinal cortex (MEC) and neighbouring cortices are thought to provide the neural substrate for self-localisation behaviours. These cells include grid cells of the MEC, which are thought to perform path integration operations to update self-location estimates. To read this grid code, downstream cells are thought to reconstruct a positional estimate as a simple rate-coded representation of space. Here, I characterise the coding schemes of grid cells and putative readout cells recorded from mice performing a virtual reality (VR) linear location task that engaged the mice in both beaconing and path integration behaviours. I found that grid cells can express two distinct coding schemes on the linear track: a position code, reflecting periodic grid fields anchored to salient features of the track, and a distance code, reflecting periodic grid fields without this anchoring. Grid cells were found to switch between these coding schemes within sessions. When grid cells were encoding position, mice performed better on trials that required path integration but not on trials that required beaconing. This result provides the first mechanistic evidence linking grid cell activity to path integration-dependent behaviour. Putative readout cells were found in the form of ramp cells, whose firing rates change proportionally with location in defined regions of the linear track. This ramping activity was primarily explained by track position rather than by other kinematic variables such as speed and acceleration. These representations were maintained across both trial types and outcomes, indicating that they likely result from recall of the track structure. Together, these results support the functional importance of grid and ramp cells for self-localisation behaviours.
Future investigations will examine the coherence between these two neural populations, which may together form a complete neural system for encoding and decoding self-location in the brain.
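The position and distance codes described in the abstract can be illustrated with a toy model. This is a minimal sketch, not the thesis's analysis: the track length, grid spacing, and the reduction of each scheme to a firing-field phase are illustrative assumptions.

```python
TRACK_LENGTH = 200.0  # cm; hypothetical VR linear track
GRID_SPACING = 90.0   # cm between periodic grid firing fields (assumed)

def field_phase(distance_run: float, spacing: float = GRID_SPACING) -> float:
    """Phase (in [0, 1)) within the periodic grid firing pattern."""
    return (distance_run % spacing) / spacing

# Position code: fields are anchored to salient track features, so the
# phase at a given track location (here 50 cm) repeats on every trial.
pos_phase_trial1 = field_phase(50.0)
pos_phase_trial2 = field_phase(50.0)

# Distance code: fields follow total distance run without re-anchoring,
# so the phase at the same track location drifts between trials.
dist_phase_trial1 = field_phase(0 * TRACK_LENGTH + 50.0)
dist_phase_trial2 = field_phase(1 * TRACK_LENGTH + 50.0)
```

Under the position code the phase at a fixed location is identical across trials; under the distance code it keeps advancing with total distance run, which is what distinguishes the two schemes on a linear track.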

    Preterm pigs for preterm birth research: reasonably feasible

    Preterm birth disrupts the pattern and course of organ development, which may result in morbidity and mortality of newborn infants. Large animal models are crucial resources for developing novel, credible, and effective treatments for preterm infants. This review summarizes the classification, definition, and prevalence of preterm birth, and analyzes how many days of animal life correspond to one human year in the animal models most widely used in preterm birth studies (mice, rats, rabbits, sheep, and pigs). It then describes in more detail the physiological characteristics of preterm pig models at different gestational ages, including birth weight, body temperature, brain development, cardiovascular system development, respiratory, digestive, and immune system development, kidney development, and blood constituents. Studies on the postnatal development and adaptation of preterm pig models at different gestational ages will help determine the physiological basis for the survival and development of very preterm, moderately preterm, and late preterm newborns, and will also aid the study and accurate optimization of feeding conditions and diet- or drug-related interventions for preterm neonates. Finally, the review summarizes several accepted pediatric applications of preterm pig models in nutritional fortification, necrotizing enterocolitis, neonatal encephalopathy and hypothermia intervention, mechanical ventilation, and oxygen therapy for preterm infants.

    Protecting the Future: Neonatal Seizure Detection with Spatial-Temporal Modeling

    Timely detection of seizures in newborn infants using electroencephalogram (EEG) is a common yet life-saving practice in the Neonatal Intensive Care Unit (NICU). However, it requires great human effort for real-time monitoring, which calls for automated solutions to neonatal seizure detection. Moreover, current automated methods, which focus on adult epilepsy monitoring, often fail due to (i) the dynamic seizure onset location in human brains; (ii) the different montages used on neonates; and (iii) the huge distribution shift among subjects. In this paper, we propose a deep learning framework, namely STATENet, to address these exclusive challenges through exquisite designs at the temporal, spatial, and model levels. Experiments on a real-world, large-scale neonatal EEG dataset show that our framework achieves significantly better seizure detection performance.
    Comment: Accepted in IEEE International Conference on Systems, Man, and Cybernetics (SMC) 202

    Improving diagnostic procedures for epilepsy through automated recording and analysis of patients’ history

    Transient loss of consciousness (TLOC) is a time-limited state of profound cognitive impairment characterised by amnesia, abnormal motor control, loss of responsiveness, a short duration, and complete recovery. Most instances of TLOC are caused by one of three health conditions: epilepsy, functional (dissociative) seizures (FDS), or syncope. There is often a delay before the correct diagnosis is made, and 10-20% of individuals initially receive an incorrect diagnosis. Clinical decision tools based on the endorsement of TLOC symptom lists have been limited to distinguishing between two causes of TLOC. The Initial Paroxysmal Event Profile (iPEP) has shown promise but was demonstrated to be more accurate at distinguishing syncope from epilepsy or FDS than at distinguishing epilepsy from FDS. The objective of this thesis was to investigate whether interactional, linguistic, and communicative differences in how people with epilepsy and people with FDS describe their experiences of TLOC can improve the predictive performance of the iPEP. An online web application was designed that collected information about TLOC symptoms and medical history from patients and witnesses using a binary questionnaire and verbal interaction with a virtual agent (VA). We explored potential methods of automatically detecting these communicative differences, whether the differences were present during an interaction with a VA, to what extent these automatically detectable communicative differences improve the performance of the iPEP, and the acceptability of the application from the perspective of patients and witnesses. Two feature sets that had previously been applied to doctor-patient interactions, one designed to measure formulation effort and one to detect semantic differences between the two groups, predicted the diagnosis with accuracies of 71% and 81%, respectively.
Individuals with epilepsy or FDS provided descriptions of TLOC to the VA that were qualitatively similar to those observed in previous research. Both feature sets were effective predictors of the diagnosis when applied to the web application recordings (85.7% and 85.7%). Overall, the accuracy of machine learning models trained for the three-way classification between epilepsy, FDS, and syncope using the iPEP responses collected from patients through the web application was worse than the performance observed in previous research (65.8% vs 78.3%), but performance increased when features extracted from the spoken descriptions of TLOC were included (85.5%). Finally, most participants who provided feedback reported that the online application was acceptable. These findings suggest that it is feasible to differentiate between people with epilepsy and people with FDS using an automated analysis of spoken seizure descriptions. Furthermore, incorporating these features into a clinical decision tool for TLOC can improve predictive performance by sharpening the differential diagnosis between these two health conditions. Future research should use the feedback to improve the design of the application and increase the perceived acceptability of the approach.

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the datasets used in these studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
    Comment: Under Review
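As a concrete illustration of the feature-alignment family named in the abstract, the sketch below computes a linear-kernel Maximum Mean Discrepancy (MMD) between source- and target-domain feature batches. MMD is one classic alignment penalty (added to the task loss so the network learns domain-invariant features); the survey covers many other methods, and the synthetic Gaussian features here are assumptions for illustration.

```python
import numpy as np

def mmd_linear(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Linear-kernel Maximum Mean Discrepancy between two feature batches.

    A small value means the source and target feature distributions have
    similar means in feature space; minimizing it alongside the task loss
    is one classic feature-alignment strategy for unsupervised DA.
    """
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(256, 16))       # labeled source-domain features
tgt_far = rng.normal(loc=2.0, size=(256, 16))   # misaligned target domain
tgt_near = rng.normal(loc=0.0, size=(256, 16))  # well-aligned target domain

# The misaligned domain yields a much larger MMD than the aligned one.
gap_far = mmd_linear(src, tgt_far)
gap_near = mmd_linear(src, tgt_near)
```

In a real UDA pipeline `src` and `tgt_*` would be the encoder's activations on source and target batches, and `mmd_linear` would be a differentiable term in the training loss.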

    DATA AUGMENTATION FOR SYNTHETIC APERTURE RADAR USING ALPHA BLENDING AND DEEP LAYER TRAINING

    Human-based object detection in synthetic aperture radar (SAR) imagery is complex and technical, laboriously slow yet time critical: the perfect application for machine learning (ML). Training an ML network for object detection requires very large image datasets with embedded objects that are accurately and precisely labeled. Unfortunately, no such SAR datasets exist. Therefore, this paper proposes a method to synthesize wide field-of-view (FOV) SAR images by combining two existing datasets: SAMPLE, which is composed of both real and synthetic single-object chips, and MSTAR Clutter, which is composed of real wide-FOV SAR images. Synthetic objects are extracted from SAMPLE using threshold-based segmentation before being alpha-blended onto patches from MSTAR Clutter. To validate the novel synthesis method, individual object chips are created and classified using a simple convolutional neural network (CNN); testing is performed against the measured SAMPLE subset. A novel technique is also developed to investigate training activity in deep layers. The proposed data augmentation technique produces a 17% increase in the accuracy of measured SAR image classification. This improvement shows that any residual artifacts from segmentation and blending do not negatively affect ML, which is promising for future use in wide-area SAR synthesis.
    Outstanding Thesis
    Major, United States Air Force
    Approved for public release. Distribution is unlimited.
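The segmentation-and-blending step the abstract describes can be sketched as follows. This is a minimal illustration with small arrays standing in for SAR chips, not the paper's pipeline; the threshold and alpha values are assumptions.

```python
import numpy as np

def blend_object_onto_clutter(chip: np.ndarray,
                              clutter_patch: np.ndarray,
                              threshold: float,
                              alpha: float = 0.8) -> np.ndarray:
    """Paste a thresholded object chip onto a clutter patch.

    Pixels of `chip` above `threshold` are treated as object
    (threshold-based segmentation) and alpha-blended with the
    underlying clutter; all other clutter pixels are left untouched.
    """
    mask = chip > threshold
    out = clutter_patch.astype(float).copy()
    out[mask] = alpha * chip[mask] + (1.0 - alpha) * out[mask]
    return out

chip = np.zeros((8, 8))
chip[2:6, 2:6] = 1.0               # bright synthetic "target" region
clutter = np.full((8, 8), 0.1)     # uniform stand-in for an MSTAR patch
blended = blend_object_onto_clutter(chip, clutter, threshold=0.5)
```

Repeating this over many chips and randomly chosen clutter patches yields wide-FOV images with known object labels, which is the augmentation idea the thesis evaluates.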

    Exploring cognitive mechanisms involved in self-face recognition

    Because one’s own face is a significant stimulus critical to one’s identity, it is thought to be processed both quantitatively differently (i.e., recognised faster and more accurately) and qualitatively differently (i.e., in a more featural manner) than other faces. This thesis further explored the cognitive mechanisms (perceptual and attentional systems) involved in processing the own face. Chapter 2 examined the roles of holistic and featural processing of the self-face (and other faces) using eye-tracking measures in a passive-viewing paradigm and a face identification task. In the passive-viewing paradigm, the own face was sampled in a more featural manner than other faces, whereas when asked to identify faces, participants sampled all faces in a more holistic manner. Chapter 3 further explored the roles of holistic and featural processing in identification of the own face using three standard measures of holistic face processing: the face inversion task, the composite face task, and the part-whole task. Compared to other faces, individuals showed smaller “holistic interference” from a task-irrelevant bottom half for the own face in the composite face task and a stronger feature advantage for the own face, although inversion impaired the identification of all faces. These findings suggest that the self-face is processed in a more featural manner, but they do not deny a role for holistic processing. The final experimental chapter, Chapter 4, explored how cultural differences in self-concept (i.e., independent vs. interdependent) and a negative self-concept (i.e., depressive traits) modulate attentional prioritization of the own face in a visual search paradigm.
Findings showed that attentional prioritization of the own face over an unfamiliar face was modulated neither by cultural differences in self-concept nor by one’s level of depressive traits, and individuals showed no difference in attentional prioritization between the own face and a friend’s face, demonstrating no processing advantage for the own face over a personally familiar face. These findings suggest that attentional prioritization of the own face is better explained by a familiar-face advantage. Altogether, the findings of this thesis suggest that the own face is processed qualitatively differently from both personally familiar and unfamiliar faces, being processed in a more featural manner. In terms of quantitative differences, however, the self-face is processed differently from an unfamiliar face but not from a familiar face. Although the specific processing strategies for the own face may stem from the distinct visual experience one has with one’s own face, its attentional prioritization is better explained by a familiar-face advantage than by a self-specificity effect.

    Knowledge Distillation and Continual Learning for Optimized Deep Neural Networks

    Over the past few years, deep learning (DL) has been achieving state-of-the-art performance on various human tasks such as speech generation, language translation, image segmentation, and object detection. While traditional machine learning models require hand-crafted features, deep learning algorithms can automatically extract discriminative features and learn complex knowledge from large datasets. This powerful learning ability makes deep learning models attractive to both academia and big corporations. Despite their popularity, deep learning methods still have two main limitations: large memory consumption and catastrophic knowledge forgetting. First, DL algorithms use very deep neural networks (DNNs) with many billions of parameters, which have a big model size and a slow inference speed. This restricts the application of DNNs in resource-constrained devices such as mobile phones and autonomous vehicles. Second, DNNs are known to suffer from catastrophic forgetting: when incrementally learning new tasks, the model's performance on old tasks drops significantly. The ability to accommodate new knowledge while retaining previously learned knowledge is called continual learning. Since the real-world environments in which the model operates are always evolving, a robust neural network needs this continual learning ability to adapt to new changes.
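Knowledge distillation, the standard remedy for the model-size limitation named in the title, trains a small student network to match a large teacher's temperature-softened output distribution. The sketch below shows the Hinton-style soft-target loss; the logit values are made up for illustration, and in practice this term is combined with the ordinary hard-label loss.

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Numerically stable softmax with temperature scaling."""
    z = logits / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      temperature: float = 4.0) -> float:
    """Cross-entropy between temperature-softened teacher and student
    distributions, scaled by T^2 as in Hinton-style distillation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return float(-(t * np.log(s + 1e-12)).sum() * temperature ** 2)

teacher = np.array([4.0, 1.0, -2.0])
good_student = np.array([3.9, 1.1, -1.8])  # closely mimics the teacher
bad_student = np.array([-2.0, 1.0, 4.0])   # disagrees with the teacher

loss_good = distillation_loss(good_student, teacher)
loss_bad = distillation_loss(bad_student, teacher)
```

The temperature spreads probability mass over the non-argmax classes, exposing the teacher's "dark knowledge" about class similarity that one-hot labels discard; a student that tracks the teacher incurs a much smaller loss than one that contradicts it.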