Teaching computational reproducibility for neuroimaging
We describe a project-based introduction to reproducible and collaborative
neuroimaging analysis. Traditional teaching on neuroimaging usually consists of
a series of lectures that emphasize the big picture rather than the foundations
on which the techniques are based. The lectures are often paired with practical
workshops in which students run imaging analyses using the graphical interface
of specific neuroimaging software packages. Our experience suggests that this
combination leaves the student with a superficial understanding of the
underlying ideas, and an informal, inefficient, and inaccurate approach to
analysis. To address these problems, we based our course around a substantial
open-ended group project. This allowed us to teach: (a) computational tools to
ensure computationally reproducible work, such as the Unix command line,
structured code, version control, automated testing, and code review and (b) a
clear understanding of the statistical techniques used for a basic analysis of
a single run in an MRI scanner. The emphasis we put on the group project showed
the importance of standard computational tools for accuracy, efficiency, and
collaboration. The projects were broadly successful in engaging students in
working reproducibly on real scientific questions. We propose that a course on
this model should be the foundation for future programs in neuroimaging. We
believe it will also serve as a model for teaching efficient and reproducible
research in other fields of computational science.
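To make the statistical side concrete, here is a minimal sketch of the kind of single-run analysis such a course builds toward: an ordinary least-squares fit of a general linear model to one voxel's time course. The task timing, repetition time, and HRF shape below are illustrative assumptions, not the course's actual materials.

```python
# Minimal single-run GLM sketch (illustrative assumptions throughout).
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Crude double-gamma haemodynamic response function (illustrative)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

tr = 2.0                       # repetition time in seconds (assumed)
n_scans = 100
times = np.arange(n_scans) * tr

# Boxcar for a hypothetical 20 s on / 20 s off task, convolved with the HRF
boxcar = (times % 40) < 20
regressor = np.convolve(boxcar, hrf(np.arange(0, 24, tr)))[:n_scans]

# Design matrix: task regressor plus an intercept column
X = np.column_stack([regressor, np.ones(n_scans)])

# Simulated voxel time course: scaled task signal, baseline, and noise
rng = np.random.default_rng(0)
y = 2.0 * regressor + 10 + rng.normal(0, 1, n_scans)

# Ordinary least squares: beta_hat = argmin ||y - X beta||^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated task effect:", beta[0])
```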
A Foundation Model for Cell Segmentation
Cells are the fundamental unit of biological organization, and identifying
them in imaging data - cell segmentation - is a critical task for various
cellular imaging experiments. While deep learning methods have led to
substantial progress on this problem, models that have seen wide use are
specialist models that work well for specific domains. Methods that have
learned the general notion of "what is a cell" and can identify them across
different domains of cellular imaging data have proven elusive. In this work,
we present CellSAM, a foundation model for cell segmentation that generalizes
across diverse cellular imaging data. CellSAM builds on top of the Segment
Anything Model (SAM) by developing a prompt engineering approach to mask
generation. We train an object detector, CellFinder, to automatically detect
cells and prompt SAM to generate segmentations. We show that this approach
allows a single model to achieve state-of-the-art performance for segmenting
images of mammalian cells (in tissues and cell culture), yeast, and bacteria
collected with various imaging modalities. To enable accessibility, we
integrate CellSAM into DeepCell Label to further accelerate human-in-the-loop
labeling strategies for cellular imaging data. A deployed version of CellSAM is
available at https://label-dev.deepcell.org/.
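As a rough illustration of the two-stage design described above, the sketch below prompts SAM with detected bounding boxes. The `SamPredictor` calls follow the public segment-anything package; `detect_cells` is a hypothetical stand-in for CellFinder, whose interface the abstract does not specify, and the checkpoint path is assumed.

```python
# Detect-then-prompt sketch of the CellSAM idea (detector is a stand-in).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def detect_cells(image):
    """Hypothetical stand-in for CellFinder: returns (N, 4) boxes in
    xyxy pixel coordinates. A real object detector would go here."""
    return np.array([[10, 10, 60, 60], [70, 30, 120, 90]])

# Assumed checkpoint path; the ViT-B variant is one of SAM's public weights
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder RGB image
predictor.set_image(image)

masks = []
for box in detect_cells(image):
    # Each detected box prompts SAM for a single mask
    mask, score, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])  # one binary mask per detected cell
```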
Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging
An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation-mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene-reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high-purity germanium (HPGe) detectors; it exhibits excellent gamma-ray imaging characteristics, but its mobility is limited by the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI), a hand-portable gamma-ray imager comprising 96 individual 1-cm³ CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and in more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. Each of the two mobile imaging systems is used to demonstrate SDF for a variety of scenarios, including general search and mapping scenarios with several point gamma-ray sources over the range of energies relevant for Compton imaging. More specific imaging scenarios are also addressed, including directed search and object interrogation. Finally, the volumetric image quality is quantitatively investigated with respect to the number of Compton events acquired during a measurement, the list-mode uncertainty of the Compton cone data, and the uncertainty in the pose estimate from the real-time tracking algorithm. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real time.
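For orientation, the sketch below shows the geometric core of such list-mode Compton reconstruction: each event's cone half-angle follows from the Compton scattering formula, cos(theta) = 1 - m_e c^2 (1/E' - 1/E0) with E' the scattered-photon energy, and a simple back-projection accumulates cone surfaces in a voxel grid. This is an illustrative simplification under assumed variable names and tolerances, not the iterative algorithm used in the work.

```python
# Compton cone geometry and naive list-mode back-projection (illustrative).
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def cone_half_angle(e_scatter, e_absorb):
    """Cone half-angle from deposited energies (keV):
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), with E' = E0 - e_scatter."""
    e0 = e_scatter + e_absorb
    cos_t = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / e0)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def backproject(events, voxels, tol=0.05):
    """Accumulate votes in voxels lying near each event's cone surface.
    events: iterable of (apex, axis, e_scatter, e_absorb), with apex the
    first interaction site and axis the unit scatter direction;
    voxels: (N, 3) array of candidate source positions."""
    votes = np.zeros(len(voxels))
    for apex, axis, e1, e2 in events:
        theta = cone_half_angle(e1, e2)
        d = voxels - apex                        # apex-to-voxel vectors
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        ang = np.arccos(np.clip(d @ axis, -1.0, 1.0))
        votes += np.abs(ang - theta) < tol       # voxel near cone surface
    return votes
```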
Advances in Nuclear Radiation Sensing: Enabling 3-D Gamma-Ray Vision.
The enormous advances in sensing and data-processing technologies, in combination with recent developments in nuclear radiation detection and imaging, enable unprecedented and "smarter" ways to detect, map, and visualize nuclear radiation. The recently developed concept of three-dimensional (3-D) scene-data fusion now allows us to "see" nuclear radiation in three dimensions, in real time, and specific to radionuclides. It is based on a multi-sensor instrument that is able to map a local scene and to fuse the scene data with nuclear radiation data in 3-D while the instrument moves freely through the scene. This new concept is agnostic to the deployment platform and to the specific radiation detection or imaging modality. We have demonstrated this 3-D scene-data fusion concept in a range of configurations and locations, such as the Fukushima Prefecture in Japan and Chernobyl in Ukraine, on unmanned and manned aerial and ground-based platforms. It provides new means for the detection, mapping, and visualization of radiological and nuclear materials, relevant for the safe and secure operation of nuclear and radiological facilities and for the response to accidental or intentional releases of radioactive materials, where a timely, accurate, and effective assessment is critical. In addition, the ability to visualize nuclear radiation in 3-D and in real time provides new means of communicating with the public and helps to overcome one of the major public concerns: not being able to "see" nuclear radiation.
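As a minimal illustration of the fusion step, the sketch below maps radiation-interaction positions from the detector frame into the world frame using a time-stamped SLAM pose, the prerequisite for reconstructing sources in a shared 3-D scene. The 4x4 homogeneous-transform convention and all names here are assumptions for illustration, not the instrument's actual software interface.

```python
# Detector-frame to world-frame mapping with a SLAM pose (illustrative).
import numpy as np

def to_world(pose, points_detector):
    """Map (N, 3) detector-frame points into the world frame using a
    4x4 homogeneous pose matrix T_world<-detector from the tracker."""
    homog = np.hstack([points_detector, np.ones((len(points_detector), 1))])
    return (pose @ homog.T).T[:, :3]

# Example pose: a 90-degree yaw plus a 1 m translation along world x
pose = np.array([[0.0, -1.0, 0.0, 1.0],
                 [1.0,  0.0, 0.0, 0.0],
                 [0.0,  0.0, 1.0, 0.0],
                 [0.0,  0.0, 0.0, 1.0]])
interaction_positions = np.array([[0.1, 0.0, 0.2]])  # metres, detector frame
print(to_world(pose, interaction_positions))
```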