Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science
(abridged for arXiv) With the first direct detection of gravitational waves,
the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has
initiated a new field of astronomy by providing an alternate means of sensing
the universe. The extreme sensitivity required to make such detections is
achieved through exquisite isolation of all sensitive components of LIGO from
non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to
a variety of instrumental and environmental sources of noise that contaminate
the data. Of particular concern are noise features known as glitches, which are
transient and non-Gaussian in nature and occur at a high enough rate that
accidental coincidence between the two LIGO detectors is non-negligible.
In this paper we describe an innovative project that combines crowdsourcing
with machine learning to aid in the challenging task of categorizing all of the
glitches recorded by the LIGO detectors. Through the Zooniverse platform, we
engage and recruit volunteers from the public to categorize images of glitches
into pre-identified morphological classes and to discover new classes that
appear as the detectors evolve. In addition, machine learning algorithms are
used to categorize images after being trained on human-classified examples of
the morphological classes. Leveraging the strengths of both classification
methods, we create a combined method with the aim of improving the efficiency
and accuracy of each individual classifier. The resulting classification and
characterization should help LIGO scientists to identify causes of glitches and
subsequently eliminate them from the data or the detector entirely, thereby
improving the rate and accuracy of gravitational-wave observations. We
demonstrate these methods using a small subset of data from LIGO's first
observing run.

Comment: 27 pages, 8 figures, 1 table
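The combination of machine and human classification described above can be illustrated with a small sketch. The function name, the per-volunteer accuracy parameter, and the Bayesian-update scheme are illustrative assumptions, not the paper's actual algorithm: the idea is simply that volunteer labels, treated as independent noisy observations, can sharpen an ML classifier's posterior over glitch classes.

```python
import numpy as np

def combine_labels(ml_posterior, crowd_votes, volunteer_accuracy=0.8):
    """Hypothetical sketch: update an ML posterior over glitch classes
    with crowd votes, modeling each volunteer as picking the true class
    with probability `volunteer_accuracy` and any other class uniformly.

    ml_posterior: array of shape (n_classes,), summing to 1.
    crowd_votes: list of class indices chosen by volunteers.
    """
    n = len(ml_posterior)
    post = np.asarray(ml_posterior, dtype=float).copy()
    for vote in crowd_votes:
        # Likelihood of this vote under each candidate true class.
        likelihood = np.full(n, (1.0 - volunteer_accuracy) / (n - 1))
        likelihood[vote] = volunteer_accuracy
        post *= likelihood
        post /= post.sum()  # renormalize to a probability distribution
    return post
```

For example, two volunteer votes for class 0 pull a posterior of (0.5, 0.3, 0.2) strongly toward class 0, which mirrors the stated aim of improving each individual classifier by leveraging the other.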
What May Visualization Processes Optimize?
In this paper, we present an abstract model of visualization and inference
processes and describe an information-theoretic measure for optimizing such
processes. In order to obtain such an abstraction, we first examined six
classes of workflows in data analysis and visualization, and identified four
levels of typical visualization components, namely disseminative,
observational, analytical and model-developmental visualization. We noticed a
common phenomenon across these levels: the transformation of data spaces
(referred to as alphabets) usually corresponds to a reduction of maximal
entropy along a workflow. Based on this observation,
we establish an information-theoretic measure of cost-benefit ratio that may be
used as a cost function for optimizing a data visualization process. To
demonstrate the validity of this measure, we examined a number of successful
visualization processes in the literature, and showed that the
information-theoretic measure can mathematically explain the advantages of such
processes over possible alternatives.

Comment: 10 pages
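The notion of entropy reduction across alphabets can be sketched concretely. The function names and the uniform-distribution assumption below are illustrative, not the paper's notation: the maximal entropy of an alphabet with n letters is log2(n) bits, so a transformation from a larger alphabet to a smaller one reduces maximal entropy, and dividing that reduction by a cost gives a simple benefit-per-cost quantity in the spirit of the measure described.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a distribution over an alphabet."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def max_entropy_reduction_per_cost(n_in, n_out, cost):
    """Illustrative benefit/cost quantity: the drop in maximal entropy
    (log2 of alphabet size) when a workflow step maps an alphabet of
    n_in letters to one of n_out letters, per unit of processing cost."""
    reduction = math.log2(n_in) - math.log2(n_out)  # bits saved
    return reduction / cost
```

For instance, a step that maps an 8-letter alphabet onto a 2-letter one at unit cost yields a reduction of 2 bits per unit cost, while a cheaper step achieving the same reduction scores proportionally higher, which is the sense in which such a ratio can serve as a cost function for optimizing a visualization process.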