Deep learning with convolutional neural networks for decoding and visualization of EEG pathology
We apply convolutional neural networks (ConvNets) to the task of
distinguishing pathological from normal EEG recordings in the Temple University
Hospital EEG Abnormal Corpus. We use two basic ConvNet architectures, one shallow and one deep, recently shown to decode task-related information from EEG at least as well as established algorithms designed for this purpose. In decoding
EEG pathology, both ConvNets reached substantially better accuracies (about 6 percentage points higher, ~85% vs. ~79%) than the only previously published result for this dataset, and remained better even when using only one minute of each recording for training and only six seconds of each recording for testing. We used automated methods to
optimize architectural hyperparameters and found intriguingly different ConvNet
architectures, e.g., with max pooling as the only nonlinearity. Visualizations of the ConvNet decoding behavior showed that the networks used spectral power changes
in the delta (0-4 Hz) and theta (4-8 Hz) frequency range, possibly alongside
other features, consistent with expectations derived from spectral analysis of
the EEG data and from the textual medical reports. Analysis of the textual
medical reports also highlighted the potential for accuracy increases by
integrating contextual information, such as the age of subjects. In summary,
the ConvNets and visualization techniques used in this study constitute a next
step towards clinically useful automated EEG diagnosis and establish a new
baseline for future work on this topic.
Comment: Published at IEEE SPMB 2017, https://www.ieeespmb.org/2017
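
Since the visualizations point to spectral power in the delta and theta bands, a worked example of band-power estimation may help fix ideas. The sketch below is not the authors' pipeline: the sampling rate, window length, 0.5 Hz delta floor, and synthetic signal are all illustrative assumptions.

```python
# Minimal sketch: relative band power of one EEG channel via Welch's method.
# Assumptions (not from the paper): fs = 100 Hz, 2-second Welch windows,
# synthetic noise standing in for a real recording.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Approximate power of `signal` within `band` = (f_lo, f_hi) in Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 2-second windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # rectangle-rule integral

fs = 100  # assumed sampling rate in Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)  # one minute of synthetic "EEG"

delta = band_power(eeg, fs, (0.5, 4.0))  # delta ~0-4 Hz (0.5 Hz floor skips DC)
theta = band_power(eeg, fs, (4.0, 8.0))  # theta 4-8 Hz
total = band_power(eeg, fs, (0.5, 40.0))
print(f"relative delta power: {delta / total:.3f}")
print(f"relative theta power: {theta / total:.3f}")
```

Feeding such per-band features into a simple classifier is a common baseline against which end-to-end ConvNets like those in the paper are compared.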
Clear Visual Separation of Temporal Event Sequences
Extracting and visualizing informative insights from temporal event sequences
becomes increasingly difficult when data volume and variety increase. Besides
dealing with high event type cardinality and many distinct sequences, it can be
difficult to tell whether it is appropriate to combine multiple events into one
or utilize additional information about event attributes. Existing approaches
often make use of frequent sequential patterns extracted from the dataset; however, these patterns are limited in terms of interpretability and utility.
In addition, it is difficult to assess the role of absolute and relative time
when using pattern mining techniques.
In this paper, we present methods that address these challenges by automatically learning composite events, which enable better aggregation of
multiple event sequences. By leveraging event sequence outcomes, we present
appropriate linked visualizations that allow domain experts to identify
critical flows, to assess validity, and to understand the role of time.
Furthermore, we explore information gain and visual complexity metrics to
identify the most relevant visual patterns. We compare composite event learning
with two approaches for extracting event patterns using real world company
event data from an ongoing project with the Danish Business Authority.
Comment: In Proceedings of the 3rd IEEE Symposium on Visualization in Data Science (VDS), 201
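
To make the outcome-driven ranking of patterns concrete, here is a small sketch of information gain over event sequences. The subsequence-containment test and toy data are my own simplifications; the paper's composite-event learning is richer, so treat this purely as an illustration of the metric.

```python
# Minimal sketch: rank candidate event patterns by how much information
# "sequence contains pattern" gives about a binary outcome.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def contains(seq, pattern):
    """True if `pattern` occurs in `seq` as an order-preserving subsequence."""
    it = iter(seq)
    return all(ev in it for ev in pattern)  # `in` advances the iterator

def info_gain(sequences, outcomes, pattern):
    matched = [y for s, y in zip(sequences, outcomes) if contains(s, pattern)]
    unmatched = [y for s, y in zip(sequences, outcomes) if not contains(s, pattern)]
    n = len(outcomes)
    split = sum(len(g) / n * entropy(g) for g in (matched, unmatched) if g)
    return entropy(outcomes) - split

# Toy data: one event sequence per company, with a binary outcome label.
seqs = [list("ABCD"), list("ACD"), list("BD"), list("ABD"), list("CD")]
ys = [1, 1, 0, 1, 0]
for pat in (["A"], ["A", "D"], ["B", "C"]):
    print(pat, round(info_gain(seqs, ys, pat), 3))
```

Patterns with higher gain separate the outcomes better and would be the ones worth surfacing visually.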
You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems
Visual query systems (VQSs) empower users to interactively search for line
charts with desired visual patterns, typically specified using intuitive
sketch-based interfaces. Despite decades of past work on VQSs, these efforts
have not translated to adoption in practice, possibly because VQSs are largely
evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we
collaborated with experts from three diverse domains (astronomy, genetics, and material science) via a year-long user-centered design process to develop a
VQS that supports their workflow and analytical needs, and evaluate how VQSs
can be used in practice. Our study results reveal that ad-hoc sketch-only
querying is not as commonly used as prior work suggests, since analysts are
often unable to precisely express their patterns of interest. In addition, we
characterize three essential sensemaking processes supported by our enhanced
VQS. We discover that participants employ all three processes, but in different
proportions, depending on the analytical needs in each domain. Our findings
suggest that all three sensemaking processes must be integrated in order to
make future VQSs useful for a wide range of analytical inquiries.
Comment: Accepted for presentation at IEEE VAST 2019, to be held October 20-25 in Vancouver, Canada. The paper will also be published in a special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG).
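
As background for readers unfamiliar with sketch-based querying, the sketch below shows one generic matching primitive such systems build on: sliding a resampled, z-normalized user sketch over a series. This is a common baseline technique, not the system developed in the paper; the window length and toy data are assumptions.

```python
# Minimal sketch: find where a time series best matches a drawn pattern,
# using z-normalized Euclidean distance over a sliding window.
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def match_sketch(series, sketch, window):
    """Return (start_index, distance) of the window most similar to `sketch`."""
    # Resample the sketch to the window length (assumes evenly spaced points).
    target = znorm(np.interp(np.linspace(0, len(sketch) - 1, window),
                             np.arange(len(sketch)), sketch))
    best = (None, np.inf)
    for i in range(len(series) - window + 1):
        d = np.linalg.norm(znorm(series[i:i + window]) - target)
        if d < best[1]:
            best = (i, d)
    return best

rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(500))  # synthetic line chart
sketch = np.array([0.0, 1.0, 0.0, -1.0, 0.0])  # a drawn "bump then dip"
print(match_sketch(series, sketch, window=50))
```

The study's finding that analysts often cannot express their pattern precisely explains why such a distance-based primitive alone is rarely sufficient in practice.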
Analysis framework for the interaction between lean construction and building information modelling
Building with Building Information Modelling (BIM) changes design and production processes. But can BIM be used to support process changes designed according to lean production and lean construction principles? To begin to answer this question, we provide a conceptual analysis of the interaction of lean construction and BIM for improving construction. This was investigated by compiling a detailed listing of lean construction principles and BIM functionalities that are relevant from this perspective, drawn from a detailed literature survey. A research framework for analysis of the interaction between lean and BIM was then compiled. The goal of the framework is both to guide and to stimulate research; as such, the approach adopted up to this point is constructive. Ongoing research has identified 55 such interactions, the majority of which show positive synergy between the two.
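
The framework's core artifact is essentially a matrix crossing lean principles with BIM functionalities, with each cell recording an interaction and its direction. The sketch below shows that data structure; the row and column labels are drawn from commonly cited lean principles and BIM functionalities, but the cell entries are invented illustrations, not the 55 interactions identified in the research.

```python
# Minimal sketch: an interaction matrix between lean construction principles
# and BIM functionalities. Entries are hypothetical examples ("+" = positive
# synergy); empty cells mean no interaction was recorded.
LEAN_PRINCIPLES = ["Reduce variability", "Reduce cycle time", "Increase transparency"]
BIM_FUNCTIONALITIES = ["Clash detection", "4D schedule visualization", "Single model repository"]

interactions = {
    ("Reduce variability", "Clash detection"): "+",           # fewer field conflicts
    ("Reduce cycle time", "4D schedule visualization"): "+",  # faster plan iteration
    ("Increase transparency", "Single model repository"): "+",
}

# Render the matrix as a simple text table.
header = "".join(f"{f[:22]:>24}" for f in BIM_FUNCTIONALITIES)
print(f"{'':28}{header}")
for p in LEAN_PRINCIPLES:
    row = "".join(f"{interactions.get((p, f), ''):>24}" for f in BIM_FUNCTIONALITIES)
    print(f"{p:28}{row}")
```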