Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations
Fully automated decoding of human activities and intentions from direct
neural recordings is a tantalizing challenge in brain-computer interfacing.
Most ongoing efforts have focused on training decoders on specific, stereotyped
tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in
natural settings requires adaptive strategies and scalable algorithms that
require minimal supervision. Here we propose an unsupervised approach to
decoding neural states from human brain recordings acquired in a naturalistic
context. We demonstrate our approach on continuous long-term
electrocorticographic (ECoG) data recorded over many days from the brain
surface of subjects in a hospital room, with simultaneous audio and video
recordings. We first discovered clusters in high-dimensional ECoG recordings
and then annotated coherent clusters using speech and movement labels extracted
automatically from audio and video recordings. To our knowledge, this
represents the first time techniques from computer vision and speech processing
have been used for natural ECoG decoding. Our results show that our
unsupervised approach can discover distinct behaviors from ECoG data, including
moving, speaking and resting. We verify the accuracy of our approach by
comparing to manual annotations. Projecting the discovered cluster centers back
onto the brain, this technique opens the door to automated functional brain
mapping in natural settings.
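The cluster-then-annotate pipeline described above can be sketched as follows. This is a minimal illustration on synthetic features, not the authors' actual spectral ECoG features or clustering method; the k-means with farthest-point initialization and the majority-vote annotation step are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for high-dimensional neural features: three latent behavioral
# states (e.g. rest, speech, movement) with different mean activity patterns.
features = np.vstack([
    rng.normal(loc=m, scale=0.5, size=(100, 16)) for m in (0.0, 2.0, 4.0)
])

def kmeans(x, k, iters=20):
    """Minimal k-means (numpy only) standing in for the clustering step."""
    # Farthest-point initialization, then Lloyd iterations.
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(x - c, axis=1) for c in centers], axis=0)
        centers.append(x[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        assign = np.linalg.norm(x[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.stack([x[assign == c].mean(axis=0) for c in range(k)])
    return assign, centers

assign, centers = kmeans(features, k=3)

# Annotate each discovered cluster by majority vote over behavior labels
# extracted automatically from audio/video (here, the known ground truth).
auto_labels = np.repeat(["rest", "speak", "move"], 100)
cluster_names = {}
for c in range(3):
    vals, counts = np.unique(auto_labels[assign == c], return_counts=True)
    cluster_names[c] = vals[counts.argmax()]

print(sorted(cluster_names.values()))
```

On well-separated synthetic states the discovered clusters recover the three behaviors; the interesting part of the real pipeline is that the labels come from automated video and audio annotation rather than ground truth.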
The Relative Importance of Depth Cues and Semantic Edges for Indoor Mobility Using Simulated Prosthetic Vision in Immersive Virtual Reality
Visual neuroprostheses (bionic eyes) have the potential to treat degenerative
eye diseases that often result in low vision or complete blindness. These
devices rely on an external camera to capture the visual scene, which is then
translated frame-by-frame into an electrical stimulation pattern that is sent
to the implant in the eye. To highlight more meaningful information in the
scene, recent studies have tested the effectiveness of deep-learning based
computer vision techniques, such as depth estimation to highlight nearby
obstacles (DepthOnly mode) and semantic edge detection to outline important
objects in the scene (EdgesOnly mode). However, no prior study has combined
the two, either by presenting them together (EdgesAndDepth) or by giving the
user the ability to flexibly switch between them (EdgesOrDepth). Here, we used
a neurobiologically inspired model of simulated prosthetic vision (SPV) in an
immersive virtual reality (VR) environment to test the relative importance of
semantic edges and relative depth cues to support the ability to avoid
obstacles and identify objects. We found that participants were significantly
better at avoiding obstacles using depth-based cues as opposed to relying on
edge information alone, and that roughly half the participants preferred the
flexibility to switch between modes (EdgesOrDepth). This study highlights the
relative importance of depth cues for SPV mobility and is an important first
step towards a visual neuroprosthesis that uses computer vision to improve a
user's scene understanding.
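The stimulation modes compared in the abstract can be illustrated with a toy sketch. The depth map, the edge detector (a plain Sobel filter rather than the paper's deep depth-estimation and semantic-edge models), and the `near` threshold are all assumptions made for illustration:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via simple Sobel filters (numpy only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def stimulation_pattern(depth, edges, mode="EdgesAndDepth", near=1.5):
    """Combine cues into a binary electrode-activation map.
    Modes mirror the abstract: DepthOnly, EdgesOnly, EdgesAndDepth."""
    near_mask = depth < near           # highlight nearby obstacles
    edge_mask = edges > edges.mean()   # outline object boundaries
    if mode == "DepthOnly":
        return near_mask
    if mode == "EdgesOnly":
        return edge_mask
    return near_mask | edge_mask       # EdgesAndDepth

# Toy scene: a near object occupying the left half of an 8x8 depth map.
depth = np.full((8, 8), 3.0)
depth[:, :4] = 1.0
edges = sobel_edges(depth)
pattern = stimulation_pattern(depth, edges, mode="EdgesAndDepth")
print(int(pattern.sum()))
```

The fourth condition in the study, EdgesOrDepth, is simply letting the user pick the `mode` argument at runtime rather than fusing the two masks.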
3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
Teeth localization, segmentation, and labeling from intra-oral 3D scans are
essential tasks in modern dentistry to enhance dental diagnostics, treatment
planning, and population-based studies on oral health. However, developing
automated algorithms for teeth analysis presents significant challenges due to
variations in dental anatomy, imaging protocols, and limited availability of
publicly accessible data. To address these challenges, the 3DTeethSeg'22
challenge was organized in conjunction with the International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022,
with a call for algorithms tackling teeth localization, segmentation, and
labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans
from 900 patients was prepared, and each tooth was individually annotated by a
human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this
dataset. In this study, we present the evaluation results of the 3DTeethSeg'22
challenge. The 3DTeethSeg'22 challenge code can be accessed at:
https://github.com/abenhamadou/3DTeethSeg22_challenge
Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge
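As an illustration of how per-tooth segmentation quality might be scored, here is a sketch of a mean per-tooth IoU on point-wise labels. This is a simplified illustrative metric, not the official one: the challenge's evaluation also covers localization and labeling and differs in detail (see the repository above):

```python
import numpy as np

def per_tooth_iou(pred, gt, tooth_ids):
    """Mean intersection-over-union across tooth labels, for arrays that
    assign one label per scan point. Teeth absent from both arrays are
    skipped rather than counted as perfect matches."""
    ious = []
    for t in tooth_ids:
        p, g = pred == t, gt == t
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Toy 8-point scan labeled with FDI-style tooth numbers (0 = gingiva).
gt = np.array([11, 11, 12, 12, 12, 21, 21, 0])
pred = np.array([11, 11, 12, 12, 21, 21, 21, 0])
print(per_tooth_iou(pred, gt, tooth_ids=[11, 12, 21]))
```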
Unifying terrain awareness for the visually impaired through real-time semantic segmentation.
Navigational assistance aims to help visually-impaired people travel through their environment safely and independently. This is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians, and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
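The unification idea, collapsing one pixel-wise semantic prediction into several assistive cues at once instead of running separate detectors, can be sketched as follows. The class ids and category groupings here are hypothetical, not the paper's actual label set:

```python
import numpy as np

# Hypothetical class ids from a semantic segmentation model; the real system
# uses its own label set, this only illustrates the unification step.
CLASSES = {0: "road", 1: "sidewalk", 2: "stairs", 3: "water",
           4: "pedestrian", 5: "vehicle", 6: "obstacle"}
TRAVERSABLE = {"sidewalk"}
HAZARD = {"stairs", "water", "pedestrian", "vehicle", "obstacle"}

def assistive_feedback(label_map):
    """Collapse a pixel-wise label map into high-level awareness cues:
    one segmentation pass feeds both terrain awareness and hazard warnings."""
    names = {CLASSES[c] for c in np.unique(label_map)}
    return {
        "traversable_ahead": bool(names & TRAVERSABLE),
        "hazards": sorted(names & HAZARD),
    }

# Toy 4x4 prediction: sidewalk in view, with a pedestrian and stairs.
label_map = np.array([[1, 1, 1, 1],
                      [1, 4, 1, 1],
                      [1, 1, 2, 1],
                      [1, 1, 1, 1]])
print(assistive_feedback(label_map))
```

A real system would of course reason about where in the frame each class appears (and fuse depth, as the paper does), but the single shared prediction is what removes the cost of running detectors jointly.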
Deep Interactive Learning: An Efficient Labeling Approach for Deep Learning-Based Osteosarcoma Treatment Response Assessment
Osteosarcoma is the most common malignant primary bone tumor. Standard
treatment includes pre-operative chemotherapy followed by surgical resection.
The response to treatment as measured by ratio of necrotic tumor area to
overall tumor area is a known prognostic factor for overall survival. This
assessment is currently done manually by pathologists by looking at glass
slides under the microscope which may not be reproducible due to its subjective
nature. Convolutional neural networks (CNNs) can be used for automated
segmentation of viable and necrotic tumor on osteosarcoma whole slide images.
One bottleneck for supervised learning is that large amounts of accurate
annotations are required for training which is a time-consuming and expensive
process. In this paper, we describe Deep Interactive Learning (DIaL) as an
efficient labeling approach for training CNNs. After an initial labeling step
is done, annotators only need to correct mislabeled regions from previous
segmentation predictions to improve the CNN model until satisfactory
predictions are achieved. Our experiments show that a CNN model trained with
only 7 hours of annotation using DIaL can successfully estimate necrosis
ratios within the expected inter-observer variation rate for this
non-standardized manual surgical pathology task.
Comment: Accepted at MICCAI 2020
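The DIaL loop (train, predict, have the annotator correct only the mislabeled regions, retrain) can be sketched with a toy classifier standing in for the CNN. The synthetic data, the nearest-centroid model, and the simulated annotator are all illustrative assumptions:

```python
import numpy as np

def train(features, labels):
    """Toy stand-in for CNN training: one centroid per class."""
    return {c: features[labels == c].mean() for c in np.unique(labels)}

def predict(model, features):
    """Assign each sample to its nearest class centroid."""
    classes = sorted(model)
    d = np.abs(features[:, None] - np.array([model[c] for c in classes]))
    return np.array(classes)[d.argmin(axis=1)]

rng = np.random.default_rng(1)
features = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
truth = np.concatenate([np.zeros(50, int), np.ones(50, int)])

# Initial cheap labeling pass: 20% of the labels are wrong.
labels = truth.copy()
flip = rng.choice(100, 20, replace=False)
labels[flip] = 1 - labels[flip]

# Deep Interactive Learning loop: the annotator only fixes regions where
# the current model's prediction is wrong; correct predictions are kept.
for _ in range(3):
    model = train(features, labels)
    pred = predict(model, features)
    wrong = pred != truth            # annotator spots the errors...
    labels[wrong] = truth[wrong]     # ...and corrects only those
    labels[~wrong] = pred[~wrong]    # accepted predictions become labels

final_acc = (predict(train(features, labels), features) == truth).mean()
print(final_acc)
```

The labeling effort saved comes from the `labels[wrong] = ...` line: annotation time is spent only on the model's mistakes, not on re-annotating whole slides.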
Machine Learning Methods for Image Analysis in Medical Applications, from Alzheimer's Disease, Brain Tumors, to Assisted Living
Healthcare has progressed greatly in recent years owing to technological advances, where machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related problems (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and improve the performance of glioma classification. In the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence and deep learning may open new avenues for real medical applications.
Landmine Victim or Landmine Survivor: What Is in a Name?
People injured by landmines are referred to as landmine victims by some and landmine survivors by others. Their view of self, as well as the perspectives of their families, communities, and aid agencies toward the terms 'victim' or 'survivor', may significantly affect their recovery and their ability to reintegrate into their communities. We will present a summary of the literature addressing the victim/survivor continuum, as well as the different vantage points of victim-versus-survivor terminology and the potential influence this language has in shaping injured individuals' recovery.
Wright State University's Symposium of Student Research, Scholarship & Creative Activities from Thursday, October 26, 2023
The student abstract booklet is a compilation of abstracts from students' oral and poster presentations at Wright State University's Symposium of Student Research, Scholarship & Creative Activities on October 26, 2023.
https://corescholar.libraries.wright.edu/celebration_abstract_books/1001/