NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications, including segmentation, regression, image generation and
representation learning. Components of the NiftyNet pipeline, including data
loading, data augmentation, network architectures, loss functions and
evaluation metrics, are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission
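The modular pipeline the abstract describes (data loading, augmentation, loss functions as swappable components) can be illustrated with a minimal sketch. These function names are purely illustrative and are not NiftyNet's actual API; the Dice loss is a standard segmentation loss of the kind such pipelines provide.

```python
# Conceptual sketch of a modular medical-imaging pipeline in the spirit of
# NiftyNet (illustrative only; these names are NOT NiftyNet's actual API).

def load_volume(path):
    # Stand-in for a data loader: returns a tiny 3D "volume" as nested lists.
    return [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

def flip_augment(volume):
    # Simple augmentation: flip the volume along its first axis.
    return volume[::-1]

def dice_loss(pred, target):
    # Soft Dice loss on flat binary lists, a common segmentation loss.
    intersection = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * intersection) / (sum(pred) + sum(target))

def run_pipeline(path, augmentations, loss_fn, pred, target):
    # Compose the pluggable components: load, augment, then score.
    volume = load_volume(path)
    for aug in augmentations:
        volume = aug(volume)
    return volume, loss_fn(pred, target)

volume, loss = run_pipeline("scan.nii", [flip_augment], dice_loss,
                            pred=[1, 1, 0, 0], target=[1, 0, 0, 1])
print(round(loss, 2))  # 0.5 — perfect overlap would give 0.0
```

The point of the design is that each stage is a plain callable, so swapping in a different augmentation or loss requires no change to the pipeline driver.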
Sigma web interface for reactor data applications
We present the Sigma Web interface, which provides user-friendly access for online
analysis and plotting of the evaluated and experimental nuclear reaction data
stored in the ENDF-6 and EXFOR formats. The interface includes advanced
browsing and search capabilities, interactive plots of cross sections, angular
distributions and spectra, nubars, comparisons between evaluated and
experimental data, computations for cross section data sets, pre-calculated
integral quantities, neutron cross section uncertainties plots and
visualization of covariance matrices. Sigma is publicly available at the
National Nuclear Data Center website at http://www.nndc.bnl.gov/sigma
Teaching Data Science
We describe an introductory data science course, entitled Introduction to
Data Science, offered at the University of Illinois at Urbana-Champaign. The
course introduced general programming concepts by using the Python programming
language with an emphasis on data preparation, processing, and presentation.
The course had no prerequisites, and students were not expected to have any
programming experience. This introductory course was designed to cover a wide
range of topics, from the nature of data, to storage, to visualization, to
probability and statistical analysis, to cloud and high performance computing,
without becoming overly focused on any one subject. We conclude this article
with a discussion of lessons learned and our plans to develop new data science
courses.
Comment: 10 pages, 4 figures, International Conference on Computational
Science (ICCS 2016)
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
The Montage Image Mosaic Engine was designed as a scalable toolkit, written
in C for performance and portability across *nix platforms, that assembles FITS
images into mosaics. The code is freely available and has been widely used in
the astronomy and IT communities for research, product generation and for
developing next-generation cyber-infrastructure. Recently, it has begun to
find applicability in the field of visualization. This has come about because
the toolkit design allows easy integration into scalable systems that process
data for subsequent visualization in a browser or client. It also
includes a visualization tool suitable for automation and for integration into
Python: mViewer creates, with a single command, complex multi-color images
overlaid with coordinate displays, labels, and observation footprints, and
includes an adaptive image histogram equalization method that preserves the
structure of a stretched image over its dynamic range. The Montage toolkit
contains functionality originally developed to support the creation and
management of mosaics but which also offers value to visualization: a
background rectification algorithm that reveals the faint structure in an
image; and tools for creating cutout and down-sampled versions of large images.
Version 5 of Montage offers support for visualizing data written in the HEALPix
sky-tessellation scheme, and functionality for processing and organizing images
to comply with the TOAST sky-tessellation scheme required for consumption by
the World Wide Telescope (WWT). Four online tutorials enable readers to
reproduce and extend all the visualizations presented in this paper.
Comment: 16 pages, 9 figures; accepted for publication in the PASP Special
Focus Issue: Techniques and Methods for Astrophysical Data Visualization
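The adaptive stretch the abstract mentions builds on classical histogram equalization: pixel values are remapped through their cumulative histogram so the output fills the dynamic range. The sketch below shows only that basic idea in pure Python; mViewer's actual algorithm additionally adapts the stretch to preserve structure, which is not reproduced here.

```python
# Minimal sketch of histogram equalization, the idea underlying Montage's
# adaptive image stretch (illustrative; not mViewer's actual algorithm).

from collections import Counter

def equalize(pixels, levels=256):
    """Remap pixel values so their cumulative histogram is roughly uniform."""
    n = len(pixels)
    counts = Counter(pixels)
    cdf = {}
    running = 0
    for value in sorted(counts):
        running += counts[value]
        cdf[value] = running / n
    # Scale the cumulative distribution to the output range [0, levels - 1].
    return [round(cdf[p] * (levels - 1)) for p in pixels]

# A dark, low-contrast "image": values crowd the bottom of the range.
image = [10, 10, 10, 11, 11, 12, 200]
print(equalize(image))  # values are spread across the full 0-255 range
```

A plain linear stretch of the same data would leave the three near-identical dark levels indistinguishable; equalization separates them because it allocates output range in proportion to how often values occur.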
ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization
ROOT is an object-oriented C++ framework conceived in the high-energy physics
(HEP) community, designed for storing and analyzing petabytes of data in an
efficient way. Any instance of a C++ class can be stored into a ROOT file in a
machine-independent compressed binary format. In ROOT the TTree object
container is optimized for statistical data analysis over very large data sets
by using vertical data storage techniques. These containers can span a large
number of files on local disks, the web, or a number of different shared file
systems. In order to analyze this data, the user can choose from a wide set of
mathematical and statistical functions, including linear algebra classes,
numerical algorithms such as integration and minimization, and various methods
for performing regression analysis (fitting). In particular, ROOT offers
packages for complex data modeling and fitting, as well as multivariate
classification based on machine learning techniques. A central piece in these
analysis tools are the histogram classes which provide binning of one- and
multi-dimensional data. Results can be saved in high-quality graphical formats
like PostScript and PDF, or in bitmap formats like JPG or GIF. The result can
also be stored into ROOT macros that allow a full recreation and rework of the
graphics. Users typically create their analysis macros step by step, making use
of the interactive C++ interpreter CINT, while running over small data samples.
Once the development is finished, they can run these macros at full compiled
speed over large data sets, using on-the-fly compilation, or by creating a
stand-alone batch program. Finally, if processing farms are available, the user
can reduce the execution time of intrinsically parallel tasks - e.g. data
mining in HEP - by using PROOF, which will take care of optimally distributing
the work over the available resources in a transparent way.
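The "vertical data storage" behind TTree means events are stored column by column rather than row by row, so an analysis touching one variable reads only that column. The toy class below illustrates the idea in pure Python; the names are illustrative and are not ROOT's API.

```python
# Toy illustration of the "vertical" (columnar) storage idea behind ROOT's
# TTree (illustrative only; this is not ROOT's actual API).

class ColumnStore:
    def __init__(self):
        self.columns = {}  # branch name -> list of values

    def fill(self, event):
        # Append one event (a dict of branch -> value) column-wise.
        for name, value in event.items():
            self.columns.setdefault(name, []).append(value)

    def read(self, name):
        # Only the requested branch is touched; in a real columnar store,
        # the other columns would never be read from disk.
        return self.columns[name]

tree = ColumnStore()
for pt, eta in [(41.2, 0.3), (17.8, -1.1), (63.5, 2.0)]:
    tree.fill({"pt": pt, "eta": eta})

# Histogram-style selection that needs only the "pt" column.
selected = [v for v in tree.read("pt") if v > 20.0]
print(selected)  # [41.2, 63.5]
```

For statistical analysis over very large samples this layout wins because a typical cut or histogram fill scans a handful of branches out of hundreds, so I/O scales with the columns used, not the full event size.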
The space physics environment data analysis system (SPEDAS)
With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories to study the space environment, data retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform (www.spedas.org), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular and configurable to the needs of individual missions, it works in both command-line (ideal for experienced users) and Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have “crib-sheets,” user-command sequences in ASCII format that can facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer’s Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool and potentially even receiving an under-the-hood replacement).
We review the SPEDAS development history, goals, and current implementation. We explain its “modes of use” with examples geared for users and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans.
Published version
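The record-and-repeat workflow that crib-sheets enable can be sketched as a tiny replayer: a plain-text command list is parsed line by line and dispatched to named operations. This is a conceptual Python illustration only; real SPEDAS crib-sheets are IDL command sequences, and the command names below are hypothetical.

```python
# Conceptual sketch of a SPEDAS-style "crib sheet": a plain-text, shareable
# command sequence replayed to reproduce an analysis (illustrative only;
# actual crib sheets are IDL command sequences, not this Python API).

def run_crib(crib_text, commands):
    """Execute each non-comment line of a crib sheet via a command table."""
    log = []
    for line in crib_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):  # ';' starts a comment, as in IDL
            continue
        name, *args = line.split()
        log.append(commands[name](*args))
    return log

crib = """
; load and plot one day of data (hypothetical command names)
load_data 2008-02-26
plot_var bz
"""

commands = {
    "load_data": lambda day: f"loaded {day}",
    "plot_var": lambda var: f"plotted {var}",
}
print(run_crib(crib, commands))  # ['loaded 2008-02-26', 'plotted bz']
```

Because the crib is plain ASCII, it can be mailed to a collaborator who replays it verbatim, which is exactly the property the abstract credits with moving discussions from data processing to data interpretation.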
vizLib: Using The Seven Stages of Visualization to Explore Population Trends and Processes in Local Authority Research