Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images
of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL
maps are derived through computational staining using a convolutional neural network trained to
classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and
correlation with overall survival. TIL map structural patterns were grouped using standard
histopathological parameters. These patterns are enriched in particular T cell subpopulations
derived from molecular measures. TIL densities and spatial structure were differentially enriched
among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial
infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic
patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for
the TCGA image archives, with insights into the tumor-immune microenvironment.
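The spatial-structure step can be illustrated with a toy example. The paper uses affinity propagation; the connected-component grouping below is a simplified stand-in for finding local spatial structure in a binary TIL map, and the grid values are invented:

```python
from collections import deque

def til_clusters(grid):
    """Group adjacent TIL-positive patches (value 1) into spatial clusters
    using 4-connectivity; returns a list of clusters, each a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                # BFS flood fill from this unvisited positive patch
                queue, comp = deque([(r, c)]), []
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                clusters.append(comp)
    return clusters

# Toy 4x4 TIL map (invented): two separate TIL-positive regions
til_map = [[1, 1, 0, 0],
           [0, 1, 0, 0],
           [0, 0, 0, 1],
           [0, 0, 0, 1]]
print(len(til_clusters(til_map)))  # 2 clusters
```

In the paper, each grid cell would correspond to an image patch classified TIL-positive by the CNN, and cluster shapes and sizes would feed the downstream survival and subtype analyses.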
Unsupervised Motion Artifact Detection in Wrist-Measured Electrodermal Activity Data
One of the main benefits of a wrist-worn computer is its ability to collect a
variety of physiological data in a minimally intrusive manner. Among these
data, electrodermal activity (EDA) is readily collected and provides a window
into a person's emotional and sympathetic responses. EDA data collected using a
wearable wristband are easily influenced by motion artifacts (MAs) that may
significantly distort the data and degrade the quality of analyses performed on
the data if not identified and removed. Prior work has demonstrated that MAs
can be successfully detected using supervised machine learning algorithms on a
small data set collected in a lab setting. In this paper, we demonstrate that
unsupervised learning algorithms perform competitively with supervised
algorithms for detecting MAs on EDA data collected in both a lab-based setting
and a real-world setting comprising about 23 hours of data. We also find,
somewhat surprisingly, that incorporating accelerometer data as well as EDA
improves detection accuracy only slightly for supervised algorithms and
significantly degrades the accuracy of unsupervised algorithms.

Comment: To appear at International Symposium on Wearable Computers (ISWC) 201
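As an illustration of the unsupervised setting, a very simple detector flags samples whose EDA rate of change is a statistical outlier. This is not the paper's algorithm; the signal, threshold, and spike below are all invented:

```python
import numpy as np

def flag_artifacts(eda, z_thresh=3.0):
    """Flag samples whose first-difference magnitude is a z-score outlier,
    a crude stand-in for unsupervised motion-artifact detection."""
    diff = np.abs(np.diff(eda, prepend=eda[0]))
    z = (diff - diff.mean()) / (diff.std() + 1e-12)
    return z > z_thresh

# Smooth synthetic EDA trace with one injected motion spike
rng = np.random.default_rng(0)
eda = np.cumsum(rng.normal(0, 0.01, 200)) + 2.0
eda[120] += 1.5  # simulated motion artifact
flags = flag_artifacts(eda)
print(bool(flags[120]))  # the injected spike is flagged
```

A real unsupervised pipeline would typically operate on windowed features rather than single samples, but the principle is the same: no labeled artifacts are needed to fit the detector.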
A Neural Network Method for Efficient Vegetation Mapping
This paper describes the application of a neural network method designed to improve the efficiency of map production from remote sensing data. Specifically, the ARTMAP neural network produces vegetation maps of the Sierra National Forest, in Northern California, using Landsat Thematic Mapper (TM) data. In addition to spectral values, the data set includes terrain and location information for each pixel. The maps produced by ARTMAP are of comparable accuracy to maps produced by a currently used method, which requires expert knowledge of the area as well as extensive manual editing. In fact, once field observations of vegetation classes had been collected for selected sites, ARTMAP took only a few hours to accomplish a mapping task that had previously taken many months. The ARTMAP network features fast on-line learning, so the system can be updated incrementally when new field observations arrive, without the need for retraining on the entire data set. In addition to maps that identify lifeform and Calveg species, ARTMAP produces confidence maps, which indicate where errors are most likely to occur and which can, therefore, be used to guide map editing.
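The confidence-map idea can be sketched independently of ARTMAP: given per-pixel class probabilities from any classifier, a score such as the margin between the two most probable classes highlights where errors are likely. The probabilities below are invented for illustration:

```python
import numpy as np

def confidence_map(probs):
    """Per-pixel confidence as the margin between the two most probable
    classes; a low margin marks pixels where map errors are likely."""
    sorted_p = np.sort(probs, axis=-1)
    return sorted_p[..., -1] - sorted_p[..., -2]

# 2x2 map, 3 vegetation classes (invented probabilities)
probs = np.array([[[0.90, 0.05, 0.05], [0.40, 0.35, 0.25]],
                  [[0.60, 0.30, 0.10], [0.34, 0.33, 0.33]]])
conf = confidence_map(probs)
```

Here the top-left pixel is confidently classified (margin 0.85) while the bottom-right is nearly a three-way tie, so an editor would review the latter first.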
Ambient Sound Provides Supervision for Visual Learning
The sound of crashing waves, the roar of fast-moving cars -- sound conveys
important information about the objects in our surroundings. In this work, we
show that ambient sounds can be used as a supervisory signal for learning
visual models. To demonstrate this, we train a convolutional neural network to
predict a statistical summary of the sound associated with a video frame. We
show that, through this process, the network learns a representation that
conveys information about objects and scenes. We evaluate this representation
on several recognition tasks, finding that its performance is comparable to
that of other state-of-the-art unsupervised learning methods. Finally, we show
through visualizations that the network learns units that are selective to
objects that are often associated with characteristic sounds.

Comment: ECCV 201
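The regression target can be sketched as follows: a fixed-length statistical summary of the audio around a frame, e.g. mean spectral energy in a few frequency bands. The band layout and waveform here are invented, and the paper's actual summary statistics may differ:

```python
import numpy as np

def sound_summary(waveform, n_bands=4):
    """Summarize audio as mean spectral energy in n_bands equal-width
    frequency bands; such a fixed-length vector can serve as the
    regression target for a CNN that sees only the video frame."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.mean() for b in bands])

# Synthetic 1-second clip at 8 kHz: strong low hum plus a weaker high tone
sr = 8000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
summary = sound_summary(wave)  # most energy lands in the lowest band
```

Because the summary is deterministic given the audio, it provides free supervision: the network is trained to predict it from pixels alone, and the learned visual features are then evaluated on recognition tasks.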