MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI
Many medical imaging techniques utilize fitting approaches for quantitative
parameter estimation and analysis. Common examples are pharmacokinetic modeling
in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI and
Z-spectra analysis in chemical exchange saturation transfer MRI. Most available
software tools are limited to a specific purpose and do not support custom
development or extension. Furthermore, they are mostly designed as
stand-alone solutions using external frameworks and thus cannot be easily
incorporated natively into the analysis workflow. We present a framework for
medical image fitting tasks that is included in MITK, following a rigorous
open-source, well-integrated and operating-system-independent policy. From a
software-engineering perspective, the local models, the fitting
infrastructure and the results representation are abstracted and thus can be
easily adapted
fitting task on image data, independent of image modality or model. Several
ready-to-use libraries for model fitting and common use cases, including fit
evaluation and visualization, were implemented. Their embedding into MITK
allows for easy data loading, pre- and post-processing and thus a natural
inclusion of model fitting into an overarching workflow. As an example, we
present a comprehensive set of plug-ins for the analysis of DCE MRI data, which
we validated on existing and novel digital phantoms, yielding competitive
deviations between fit and ground truth. Providing a very flexible environment,
our software mainly addresses developers of medical imaging software that
includes model fitting algorithms and tools. Additionally, the framework is of
high interest to users in the domain of perfusion MRI, as it offers
feature-rich, freely available, validated tools to perform pharmacokinetic
analysis on DCE MRI data, with both interactive and automated batch
processing workflows.
Comment: 31 pages, 11 figures. URL: http://mitk.org/wiki/MITK-ModelFi
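To make the kind of voxel-wise pharmacokinetic fit described above concrete, here is a minimal sketch in Python/SciPy of fitting the standard Tofts model to a simulated DCE signal. The arterial input function, noise level and parameter values are illustrative assumptions, and this does not use MITK-ModelFit's actual C++ API.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 60)            # acquisition times [min]
aif = 6.0 * t * np.exp(-2.0 * t)         # toy arterial input function (assumed)

def tofts(t, ktrans, kep):
    """Standard Tofts model: convolution of the AIF with Ktrans*exp(-kep*t)."""
    dt = t[1] - t[0]
    residue = np.exp(-kep * t)
    # discrete approximation of the convolution integral
    return ktrans * np.convolve(aif, residue)[: len(t)] * dt

truth = (0.25, 0.8)                      # Ktrans, kep in 1/min (assumed)
rng = np.random.default_rng(42)
signal = tofts(t, *truth) + rng.normal(0.0, 0.002, t.size)

(ktrans, kep), _ = curve_fit(tofts, t, signal, p0=(0.1, 0.5),
                             bounds=([0.0, 0.0], [2.0, 5.0]))
print(f"Ktrans = {ktrans:.3f} /min, kep = {kep:.3f} /min")
```

In a real DCE-MRI analysis this fit is repeated independently for every voxel; that bookkeeping, together with model and result abstraction, is exactly what a framework like MITK-ModelFit takes care of.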
Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large-scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu
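As an illustration of the workflow idea described above, the following sketch models analysis steps as a dependency graph executed in topological order while recording simple provenance. The step names and outputs are hypothetical, and this is not the LONI Pipeline's API, only the underlying concept.

```python
from datetime import datetime
from graphlib import TopologicalSorter

# Each (hypothetical) step lists its dependencies and a stand-in action.
steps = {
    "skull_strip": ([],                      lambda: "brain.nii"),
    "register":    (["skull_strip"],         lambda: "brain_mni.nii"),
    "segment":     (["register"],            lambda: "tissue_maps.nii"),
    "stats":       (["register", "segment"], lambda: "report.csv"),
}

# Execute in dependency order, recording provenance for each step.
provenance = []
order = TopologicalSorter({name: set(deps) for name, (deps, _) in steps.items()})
for name in order.static_order():
    deps, run = steps[name]
    provenance.append({
        "step": name,
        "inputs": deps,
        "output": run(),
        "timestamp": datetime.now().isoformat(),
    })

for record in provenance:
    print(record)
```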
Inviwo -- A Visualization System with Usage Abstraction Levels
The complexity of today's visualization applications demands specific
visualization systems tailored for the development of these applications.
Frequently, such systems utilize levels of abstraction to improve the
application development process, for instance by providing a data flow network
editor. Unfortunately, these abstractions result in several issues, which need
to be circumvented through an abstraction-centered system design. Often, a high
level of abstraction hides low-level details, making it difficult to directly
access the underlying computing platform, which is important for achieving
optimal performance. Therefore, we propose a layer structure
developed for modern and sustainable visualization systems that allows
developers to interact with all contained abstraction levels. We refer to
these interaction capabilities as usage abstraction levels, since we target
application
developers with various levels of experience. We formulate the requirements for
such a system, derive the desired architecture, and present how the concepts
have been realized, by way of example, within the Inviwo visualization system.
Furthermore, we address several specific challenges that arise during the
realization of such a layered architecture, such as communication between
different computing platforms, performance-centered encapsulation, as well as
layer-independent development by supporting cross-layer documentation and
debugging capabilities.
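The following sketch illustrates what "usage abstraction levels" can mean in a data-flow visualization system: processors are wired at a high level into a network, evaluated through a mid-level pull mechanism, and operate at the low level on raw array buffers. Inviwo itself is a C++ framework; the class and function names here are hypothetical Python stand-ins.

```python
import numpy as np

class Processor:
    """One node in a pull-based data-flow network (hypothetical API)."""
    def __init__(self, name, func):
        self.name, self.func, self.inputs = name, func, []

    def connect(self, upstream):
        # High level: wire up the network, as a visual editor would.
        self.inputs.append(upstream)
        return self

    def evaluate(self):
        # Mid level: pull results through the network on demand.
        return self.func(*[p.evaluate() for p in self.inputs])

# Low level: the processor functions operate directly on raw NumPy
# buffers, so a developer can drop down to the data representation.
source = Processor("source", lambda: np.random.rand(64, 64))
blur = Processor("blur", lambda img: (img + np.roll(img, 1, 0)
                                      + np.roll(img, 1, 1)) / 3.0)
contrast = Processor("contrast",
                     lambda img: (img - img.min()) / (img.max() - img.min()))

contrast.connect(blur.connect(source))
print(contrast.evaluate().shape)  # (64, 64)
```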
A Development Environment for Visual Physics Analysis
The Visual Physics Analysis (VISPA) project integrates different aspects of
physics analyses into a graphical development environment. It addresses the
typical development cycle of (re-)designing, executing and verifying an
analysis. The project provides an extensible plug-in mechanism and includes
plug-ins for designing the analysis flow, for running the analysis on batch
systems, and for browsing the data content. The corresponding plug-ins are
based on an object-oriented toolkit for modular data analysis. We introduce the
main concepts of the project, describe the technical realization and
demonstrate the functionality in example applications.
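A plug-in mechanism of the kind the VISPA abstract mentions typically amounts to a registry that maps names onto classes sharing a small interface, so new analysis steps can be added without touching the core. The sketch below shows one minimal Python variant; the plug-in names and interface are illustrative assumptions, not VISPA's actual mechanism.

```python
PLUGINS = {}

def register(name):
    """Class decorator adding a plug-in to the registry (hypothetical)."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

class AnalysisPlugin:
    """Small shared interface every plug-in implements."""
    def run(self, data):
        raise NotImplementedError

@register("designer")
class DesignerPlugin(AnalysisPlugin):
    def run(self, data):
        return f"designed analysis flow over {len(data)} events"

@register("browser")
class BrowserPlugin(AnalysisPlugin):
    def run(self, data):
        return f"browsing data content: {data[:3]} ..."

# The core only knows the registry, not the concrete plug-in classes.
events = list(range(1000))
for name in ("designer", "browser"):
    print(PLUGINS[name]().run(events))
```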
Grid simulation services for the medical community
The first part of this paper presents a selection of medical simulation applications, including image reconstruction, near real-time registration for neurosurgery, enhanced dose-distribution calculation for radiotherapy, inhaled drug delivery prediction, plastic surgery planning and cardiovascular system simulation. The latter two topics are discussed in some detail. In the second part, we show how such services can be made available to the clinical practitioner using Grid technology. We discuss the developments and experience gained during the EU project GEMSS, which provides reliable, efficient, secure and lawful medical Grid services.
SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit
The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). These resources have been shown to improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Examples of SOCR Analyses in the non-parametric category are the Wilcoxon rank-sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis-testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and utilization of SOCR Analyses.
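To ground two of the listed analyses, the following sketch computes a parametric two-sample t-test and its non-parametric counterpart, the Wilcoxon rank-sum test, with SciPy. SOCR Analyses itself is a Java/web toolkit, so this only illustrates the underlying statistics on synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=10.0, scale=2.0, size=30)  # synthetic sample 1
b = rng.normal(loc=11.0, scale=2.0, size=30)  # synthetic sample 2

t_stat, t_p = stats.ttest_ind(a, b)           # parametric comparison
w_stat, w_p = stats.ranksums(a, b)            # non-parametric counterpart

print(f"t-test:            t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon rank-sum: z = {w_stat:.2f}, p = {w_p:.4f}")
```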