Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines
Many automatically analyzable scientific questions are well-posed and offer a
variety of information about the expected outcome a priori. Although often
neglected, this prior knowledge can be systematically exploited to make
automated analysis operations sensitive to a desired phenomenon or to evaluate
extracted content with respect to this prior knowledge. For instance, the
performance of processing operators can be greatly enhanced by a more focused
detection strategy and the direct information about the ambiguity inherent in
the extracted data. We present a new concept for the estimation and propagation
of uncertainty involved in image analysis operators. This makes it possible to use simple
processing operators that are suitable for analyzing large-scale 3D+t
microscopy images without compromising the result quality. On the foundation of
fuzzy set theory, we transform available prior knowledge into a mathematical
representation and use it extensively to enhance the result quality of various
processing operators. All presented concepts are illustrated on a typical
bioimage analysis pipeline comprised of seed point detection, segmentation,
multiview fusion and tracking. Furthermore, the functionality of the proposed
approach is validated on a comprehensive simulated 3D+t benchmark data set that
mimics embryonic development and on large-scale light-sheet microscopy data of
a zebrafish embryo. The general concept introduced in this contribution
represents a new approach to efficiently exploiting prior knowledge to improve
the result quality of image analysis pipelines. In particular, the automated
analysis of terabyte-scale microscopy data will benefit from sophisticated and
efficient algorithms that enable a fast, quantitative readout. The generality
of the concept also makes it applicable to practically any other field with
processing strategies that are arranged as linear pipelines.
Comment: 39 pages, 12 figures
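The abstract does not give the fuzzy formulation itself, but the core idea, encoding a priori expectations as fuzzy membership functions and fusing them with detector output, can be sketched as follows. All numbers, the trapezoidal prior, and the choice of the minimum t-norm are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps up to 1 on [a, b],
    stays 1 on [b, c], ramps back down to 0 on [c, d]."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical prior knowledge: nuclei radii are expected around 4-8 voxels.
def radius_prior(r):
    return trapezoid(r, 2.0, 4.0, 8.0, 12.0)

# Candidate detections: measured radius and raw detector confidence.
radii = np.array([1.5, 5.0, 7.5, 15.0])
scores = np.array([0.9, 0.8, 0.6, 0.95])

# Fuzzy AND (minimum t-norm) fuses detector confidence with the prior; the
# fused value can then propagate to later pipeline stages as an uncertainty.
fused = np.minimum(radius_prior(radii), scores)
print(fused)  # detections outside the plausible size range are suppressed to 0
```

The point of the sketch is the propagation aspect: downstream operators (segmentation, fusion, tracking) receive a graded confidence instead of a hard accept/reject decision.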
New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty
Multidimensional imaging techniques provide powerful ways to examine various
kinds of scientific questions. The routinely produced datasets in the
terabyte-range, however, can hardly be analyzed manually and require an
extensive use of automated image analysis. The present thesis introduces a new
concept for the estimation and propagation of uncertainty involved in image
analysis operators and new segmentation algorithms that are suitable for
terabyte-scale analyses of 3D+t microscopy images.
Comment: 218 pages, 58 figures, PhD thesis, Department of Mechanical
Engineering, Karlsruhe Institute of Technology, published online with KITopen
(License: CC BY-SA 3.0, http://dx.doi.org/10.5445/IR/1000057821)
Exploring the Transition from Traditional Data Analysis to Machine- and Deep-Learning Methods
Data analysis methods based on machine- and deep learning approaches are
continuously replacing traditional methods. Models based on deep learning (DL)
are applicable to many problems and often have better prediction performance
compared to traditional methods. One major difference between the traditional
methods and machine learning (ML) approaches is the black box aspect often
associated with ML and DL models. The use of ML and DL models offers many
opportunities but also poses challenges. This thesis explores some of these
opportunities and challenges of DL modelling with a focus on applications in
spectroscopy.
DL models are based on artificial neural networks (ANNs) and are known to
automatically find complex relations in the data. In Paper I, this property is
exploited by designing DL models to learn spectroscopic preprocessing based on
classical preprocessing techniques. It is shown that the DL-based preprocessing
has some merits with regard to prediction performance, but there is considerable
extra effort required to train and tune these DL models. The flexibility
of ANN architecture designs is further studied in Paper II, where a DL model for
multiblock data analysis is proposed that can also quantify the importance of
each data block.
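One reason a DL model can learn classical spectroscopic preprocessing, the starting point of Paper I, is that many classical operators are themselves fixed linear convolutions. A minimal illustration, not code from the thesis: the coefficients below are the standard Savitzky-Golay values for a window of 5 and polynomial degree 2, and the toy spectrum is invented.

```python
import numpy as np

# Classical preprocessing example: Savitzky-Golay smoothing (window 5,
# degree 2) is a fixed linear convolution with these standard coefficients.
sg_kernel = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0

def conv1d(signal, kernel):
    """'Same'-size 1D correlation with edge padding, as a conv layer computes it."""
    pad = len(kernel) // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(signal))])

rng = np.random.default_rng(0)
spectrum = np.sin(np.linspace(0, 3, 50)) + 0.1 * rng.normal(size=50)
smoothed = conv1d(spectrum, sg_kernel)

# Because the classical operator is itself a convolution, a convolutional
# layer with trainable weights can in principle learn it, or a data-adapted
# variant of it, which is the property Paper I exploits.
print(smoothed.shape)
```

The smoothed signal has visibly less high-frequency noise than the input, which is exactly what a learned preprocessing layer would be rewarded for reproducing.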
A drawback of the DL models is the lack of interpretability. To address this,
a different modelling approach is taken in Paper III where the focus is to use
DL models in such a way as to retain as much interpretability as possible. The
paper presents the concept of non-linear error modelling, where the DL model
is used to model the residuals of the linear model instead of the raw input
data. The concept essentially shrinks the black box, since the majority of
the data modelling is done by an interpretable linear model.
The final topic explored in this thesis is a more traditional modelling approach
inspired by DL techniques. Data sometimes contain intrinsic subgroups which
might be more accurately modelled separately than with a global model. Paper
IV presents a modelling framework based on locally weighted models and
fuzzy partitioning that automatically finds relevant clusters and combines the predictions of each local model. Compared to a DL model, the locally weighted
modelling framework is more transparent. It is also shown how the framework
can utilise DL techniques to be scaled to problems with huge amounts of data.
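A minimal sketch of the Paper IV idea, fuzzy partitioning plus membership-weighted local linear models, under illustrative assumptions: toy data with two subgroups, a plain fuzzy c-means partitioner, and weighted least-squares local models. This is not the thesis's actual framework, only the combination pattern it describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with two intrinsic subgroups, each following its own linear relation.
x1 = rng.uniform(-3, -1, 100)
x2 = rng.uniform(1, 3, 100)
X = np.concatenate([x1, x2])[:, None]
y = np.concatenate([2 * x1 + 1, -x2 + 4]) + 0.05 * rng.normal(size=200)

def fuzzy_cmeans(X, k=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U of shape (n, k)."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1))          # u_ij proportional to d_ij^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return U

U = fuzzy_cmeans(X)

# One membership-weighted least-squares line per cluster; the framework's
# prediction is the membership-weighted combination of the local models.
A = np.hstack([X, np.ones_like(X)])
pred = np.zeros(len(X))
for j in range(U.shape[1]):
    sw = np.sqrt(U[:, j])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    pred += U[:, j] * (A @ coef)

global_coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_local = np.mean((y - pred) ** 2)
mse_global = np.mean((y - A @ global_coef) ** 2)
print(mse_local, mse_global)
```

Because each local model is an ordinary linear fit and the memberships are inspectable, the combined model stays far more transparent than a single DL model covering all subgroups.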
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms are developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical
Time-of-Flight (TOF) MR angiographic datasets.
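The seed-based tracking step can be illustrated with a drastically simplified stand-in: threshold-based region growing from a seed point on a synthetic 2D image. The dissertation's actual vessel tracking is more sophisticated; the image, seed, and threshold below are invented for illustration.

```python
import numpy as np
from collections import deque

# Synthetic 2D "angiogram": bright vessel-like structures on a dark background.
img = np.zeros((20, 20))
img[10, 2:18] = 1.0          # horizontal vessel segment
img[5:16, 9] = 1.0           # branching vertical segment

def track_from_seed(img, seed, threshold=0.5):
    """Breadth-first region growing: collect connected bright pixels from a seed."""
    visited = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and (ny, nx) not in visited and img[ny, nx] > threshold):
                visited.add((ny, nx))
                queue.append((ny, nx))
    return visited

# An automatically placed seed would come from a vesselness filter; fixed here.
vessel = track_from_seed(img, (10, 5))
print(len(vessel))  # both branches are recovered from the single seed
```

The recovered pixel set would then feed the next stages of the pipeline (tree generation and vasculature matching).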
To facilitate study of the neonatal cortex a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
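The segmentation method builds on expectation-maximization with an explicit correction for partial-volume voxels. A generic two-class EM backbone, without that partial-volume extension and run on synthetic intensities, looks like this; the intensity values and class parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1D intensity sample: two tissue classes (e.g. grey and white matter).
gm = rng.normal(60, 8, 3000)
wm = rng.normal(110, 10, 2000)
intensities = np.concatenate([gm, wm])

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Standard two-class EM; the thesis's method additionally models mixed
# (partial-volume) voxels explicitly, which this generic sketch omits.
mu = np.array([40.0, 130.0])
sigma = np.array([15.0, 15.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior class responsibilities per voxel intensity.
    lik = np.stack([p * gaussian(intensities, m, s)
                    for p, m, s in zip(pi, mu, sigma)])
    resp = lik / lik.sum(axis=0)
    # M-step: re-estimate class weights, means, and standard deviations.
    n = resp.sum(axis=1)
    pi = n / len(intensities)
    mu = (resp @ intensities) / n
    sigma = np.sqrt((resp * (intensities - mu[:, None]) ** 2).sum(axis=1) / n)

print(np.round(mu, 1))  # means converge near the true class values
```

In the plain model above, a voxel whose intensity lies between the two means is forced toward one class; explicitly modelling such mixed voxels is what the partial-volume correction adds.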
To facilitate study of cortical development, a cortical surface registration
algorithm is developed. The method first inflates extracted cortical surfaces
and then performs a non-rigid surface registration using free-form deformations
(FFDs) to remove residual misalignment. Validation experiments using data
labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Industrial Applications: New Solutions for the New Era
This book reprints articles from the Special Issue "Industrial Applications: New Solutions for the New Age" published online in the open-access journal Machines (ISSN 2075-1702). This book consists of twelve published articles. This special edition belongs to the "Mechatronic and Intelligent Machines" section
Functional and structural MRI image analysis for brain glial tumors treatment
Joint supervision (cotutela) with the Department of Biotechnology and Life Sciences, Università degli Studi dell'Insubria. This Ph.D. Thesis is the outcome of a close collaboration between the Center for Research in Image Analysis and Medical Informatics (CRAIIM) of the Insubria University and the Operative Unit of Neurosurgery, Neuroradiology and Health Physics of the University Hospital "Circolo Fondazione Macchi", Varese.
The project aim is to investigate new methodologies and, by means of these, to develop an integrated framework able to enhance the use of Magnetic Resonance Images in order to support clinical experts in the treatment of patients with brain glial tumors.
Both of the most common uses of MRI technology for non-invasive brain inspection were analyzed. From the functional point of view, the goal has been to provide tools for an objective, reliable and non-presumptive assessment of the locations of brain areas, so that they can be preserved as much as possible during surgery.
From the structural point of view, methodologies have been studied for fully automatic brain segmentation and recognition of tumoral areas, for evaluating tumor volume and spatial distribution, and for inferring correlations with other clinical data or tracing growth trends. Each of the proposed methods has been thoroughly assessed both qualitatively and quantitatively.
All the Medical Imaging and Pattern Recognition algorithmic solutions studied for this Ph.D. Thesis have been integrated in GliCInE: Glioma Computerized Inspection Environment, a MATLAB prototype of an integrated analysis environment that offers, in addition to all the functionality specifically described in this Thesis, a set of tools needed to manage Functional and Structural Magnetic Resonance Volumes and ancillary data related to the acquisition and the patient.