Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.
An orientation field approach to modelling fibre-generated spatial point processes
This thesis introduces a new approach to analysing spatial point data clustered along
or around a system of curves or fibres with additional background noise. Such data
arise in catalogues of galaxy locations, recorded locations of earthquakes, aerial
images of minefields, and pore patterns on fingerprints. Finding the underlying
curvilinear structure of these point-pattern data sets may not only facilitate a better
understanding of how they arise but also aid reconstruction of missing data.
We base the space of fibres on the set of integral lines of an orientation field. Using
an empirical Bayes approach, we estimate the field of orientations from anisotropic
features of the data. The orientation field estimation draws on ideas from tensor
field theory (an area recently motivated by the study of magnetic resonance imaging
scans), using symmetric positive-definite matrices to estimate local anisotropies in
the point pattern through the tensor method. We also propose a new measure of
anisotropy, the modified square Fractional Anisotropy, whose statistical properties
are estimated for tensors calculated via the tensor method.
A continuous-time Markov chain Monte Carlo algorithm is used to draw samples
from the posterior distribution of fibres, exploring models with different numbers
of clusters, and fitting fibres to the clusters as it proceeds. The Bayesian approach
permits inference on various properties of the clusters and associated fibres, and the
resulting algorithm performs well on a number of very different curvilinear structures.
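The thesis's modified square Fractional Anisotropy is not defined in this abstract, so, as a hedged illustration of the tensor-method idea, the sketch below computes the standard fractional anisotropy of a symmetric positive-definite tensor from its eigenvalues (the generalised factor sqrt(d/(d-1)) reduces to the usual sqrt(3/2) in three dimensions). The function name and test tensors are illustrative only.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Standard fractional anisotropy of a symmetric positive-definite tensor:
    FA = sqrt(d/(d-1)) * ||lam - mean(lam)|| / ||lam||,
    where lam are the eigenvalues and d is the dimension."""
    lam = np.linalg.eigvalsh(tensor)     # real eigenvalues of an SPD matrix
    d = lam.size
    num = np.linalg.norm(lam - lam.mean())
    den = np.linalg.norm(lam)
    return np.sqrt(d / (d - 1)) * num / den

# An isotropic tensor gives FA = 0; a strongly oriented one gives FA near 1.
print(fractional_anisotropy(np.eye(2)))             # 0.0
print(fractional_anisotropy(np.diag([10.0, 0.1])))  # close to 1
```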
Data-Driven Sequence of Changes to Anatomical Brain Connectivity in Sporadic Alzheimer's Disease
Model-based investigations of transneuronal spreading mechanisms in neurodegenerative diseases relate the pattern of pathology severity to the brain’s connectivity matrix, which reveals information about how pathology propagates through the connectivity network. Such network models typically use networks based on functional or structural connectivity in young and healthy individuals, and only end-stage patterns of pathology, thereby ignoring the effects of normal aging and disease progression. Here, we examine the sequence of changes in the elderly brain’s anatomical connectivity over the course of a neurodegenerative disease. We do this in a data-driven manner that is not dependent upon clinical disease stage, by using event-based disease progression modeling. Using data from the Alzheimer’s Disease Neuroimaging Initiative dataset, we sequence the progressive decline of anatomical connectivity, as quantified by graph-theory metrics, in the Alzheimer’s disease brain. Ours is the first single model to address the nature, the location, and the sequence of changes to anatomical connectivity in the human brain due to Alzheimer’s disease. Our experimental results reveal new insights into Alzheimer’s disease: degeneration of anatomical connectivity in the brain may be a viable, even early, biomarker and should be considered when studying such neurodegenerative diseases.
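The abstract does not name the specific graph-theory metrics used; as one hedged illustration, the sketch below computes global efficiency (the mean inverse shortest-path length over node pairs), a metric commonly applied to binary connectivity matrices in network studies of this kind. The function name and toy adjacency matrix are assumptions.

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of an unweighted graph given its adjacency matrix:
    the mean of 1/shortest-path-length over all distinct node pairs."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                      # Floyd-Warshall shortest paths
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    inv = 1.0 / dist[~np.eye(n, dtype=bool)]  # exclude self-pairs
    return inv.mean()

# A fully connected triangle has efficiency 1; sparser graphs score lower.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(global_efficiency(tri))   # 1.0
```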
A survey on computational intelligence approaches for predictive modeling in prostate cancer
Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty and imprecision which is typically found in clinical and biological datasets. This paper provides a survey of recent work on computational intelligence approaches that have been applied to prostate cancer predictive modeling, and considers the challenges which need to be addressed. In particular, the paper considers a broad definition of computational intelligence which includes evolutionary algorithms (also known as metaheuristic or nature-inspired optimisation algorithms), Artificial Neural Networks, Deep Learning, fuzzy-based approaches, and hybrids of these, as well as Bayesian approaches and Markov models. Metaheuristic optimisation approaches, such as Ant Colony Optimisation, Particle Swarm Optimisation, and the Artificial Immune Network, have been utilised for optimising the performance of prostate cancer predictive models, and the suitability of these approaches is discussed.
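As a hedged illustration of the metaheuristic optimisers surveyed above, the sketch below implements a minimal Particle Swarm Optimisation on a toy objective. It is not the configuration used in any surveyed model; the function names, bounds, and parameter values (inertia 0.7, cognitive/social weights 1.5) are illustrative defaults.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimisation: particles move under inertia plus
    random attraction to their personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()           # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

# Toy objective: the sphere function, minimised at the origin.
best, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_f)   # close to 0
```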
9th Annual Jackson School of Geosciences Student Research Symposium, February 15, 2020
Exploring offshore sediment evidence of the 1755 CE Tsunami (Faro, Portugal): implications for the study of outer shelf Tsunami deposits
Outer shelf sedimentary records are promising for determining the recurrence intervals of
tsunamis. However, compared to onshore deposits, offshore deposits are more difficult to access,
and so far, studies of outer shelf tsunami deposits are scarce. Here, an example of studying these
deposits is presented to infer implications for tsunami-related signatures in similar environments and
potentially contribute to the detection of pre-historic tsunami events. A multidisciplinary approach was
applied to detect the sedimentary imprints left by the 1755 CE tsunami in two cores, located on
the southern Portuguese continental shelf at water depths of 58 and 91 m. Age models based on
¹⁴C and excess ²¹⁰Pb (²¹⁰Pbₓₛ) allowed a probable correspondence with the 1755 CE tsunami event. A multi-proxy
approach, including sand composition, grain-size, inorganic geochemistry, magnetic susceptibility,
and microtextural features on quartz grain surfaces, yielded evidence for a tsunami depositional
signature, although only a subtle terrestrial signal is present. A low contribution of terrestrial material
to outer shelf tsunami deposits calls for methodologies that reveal sedimentary structures linked to
tsunami event hydrodynamics. Finally, a change in general sedimentation after the tsunami event
might have influenced the signature of the 1755 CE tsunami in the outer shelf environment.
Funding: FCT UID/0350/2020, UIDB/50019/2020, SFRH/BD/147685/2019; EC (FP7) MOWER project “Rasgos Erosivos Y Depósitos Arenosos Generados Por La Mow Alrededor De Iberia: Implicaciones Paleoceanográficas, Sedimentarias Y Económicas” (CTM 2012-39599-C03).
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Volume measurement plays an important role in the production and processing of food products. Various methods have been
proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction
comes at a high computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction
have a low accuracy. Another way to measure the volume of an object is the Monte Carlo method, which uses
random points: it only requires knowing whether each random point falls inside or outside the object,
and does not require a 3D reconstruction. This paper proposes volume measurement using a
computer vision system for irregularly shaped food products without 3D reconstruction based on Monte Carlo method with
heuristic adjustment. Five images of food product were captured using five cameras and processed to produce binary images.
Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from
binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the
water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
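The inside/outside idea described above can be sketched as follows. In the paper the test comes from five binary camera images with a heuristic adjustment; here, as an assumption, a simple analytic sphere test stands in for that step, and all names are illustrative.

```python
import numpy as np

def monte_carlo_volume(inside, bounds, n=200_000, seed=0):
    """Estimate an object's volume from uniform random points in a bounding
    box: volume ~= box_volume * (fraction of points inside the object)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pts = rng.uniform(lo, hi, (n, lo.size))  # random points in the box
    hits = inside(pts)                       # boolean inside/outside test
    box_volume = np.prod(hi - lo)
    return box_volume * hits.mean()

# Unit sphere in a [-1, 1]^3 box; true volume is 4*pi/3 ~= 4.18879.
est = monte_carlo_volume(lambda p: np.sum(p ** 2, axis=1) <= 1.0,
                         ([-1, -1, -1], [1, 1, 1]))
print(est)   # close to 4.18879
```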
Pose-invariant, model-based object recognition, using linear combination of views and Bayesian statistics
This thesis presents an in-depth study on the problem of object recognition, and in particular the detection
of 3-D objects in 2-D intensity images which may be viewed from a variety of angles. A solution to this
problem remains elusive to this day, since it involves dealing with variations in geometry, photometry
and viewing angle, noise, occlusions and incomplete data. This work restricts its scope to a particular
kind of extrinsic variation: variation of the image due to changes in the viewpoint from which the object
is seen.
A technique is proposed and developed to address this problem, which falls into the category of
view-based approaches, that is, a method in which an object is represented as a collection of a small
number of 2-D views, as opposed to the construction of a full 3-D model. This technique is based on the
theoretical observation that the geometry of the set of possible images of an object undergoing 3-D rigid
transformations and scaling may, under most imaging conditions, be represented by a linear combination
of a small number of 2-D views of that object. It is therefore possible to synthesise a novel image of an
object given at least two existing and dissimilar views of the object, and a set of linear coefficients that
determine how these views are to be combined in order to synthesise the new image.
The method works in conjunction with a powerful optimization algorithm to search for and recover the
optimal linear combination coefficients that will synthesize a novel image, which is as similar as possible
to the target scene view. If the similarity between the synthesized and the target images is above some
threshold, then an object is determined to be present in the scene and its location and pose are defined,
in part, by the coefficients. A key benefit of this technique is that, because it works directly
with pixel values, it avoids the need for problematic, low-level feature extraction and solution of the
correspondence problem. As a result, a linear combination of views (LCV) model is easy to construct
and use, since it only requires a small number of stored, 2-D views of the object in question, and the
selection of a few landmark points on the object, a process easily carried out during the offline
model building stage. In addition, this method is general enough to be applied across a variety of
recognition problems and different types of objects.
The development and application of this method is initially explored looking at two-dimensional
problems, and then extending the same principles to 3-D. Additionally, the method is evaluated across
synthetic and real-image datasets, containing variations in the objects’ identity and pose. Future work on
possible extensions to incorporate a foreground/background model and lighting variations of the pixels
is examined.
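The linear-combination-of-views idea can be sketched on landmark coordinates. The thesis works directly with pixel values via an optimization algorithm; this simplified sketch instead recovers the combination coefficients by least squares on a few landmarks, with all names and the toy data being assumptions.

```python
import numpy as np

def lcv_coefficients(view1, view2, target):
    """Least-squares linear-combination-of-views fit: express each target
    landmark coordinate as a linear mix of the same landmark's coordinates
    in two basis views, plus a constant offset."""
    # Design matrix: [x1, y1, x2, y2, 1] for each landmark.
    A = np.column_stack([view1, view2, np.ones(len(view1))])
    coeff_x, *_ = np.linalg.lstsq(A, target[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, target[:, 1], rcond=None)
    synth = np.column_stack([A @ coeff_x, A @ coeff_y])  # synthesised view
    return coeff_x, coeff_y, synth

# Toy example: the target is an exact linear mix of the two basis views,
# so the fit reproduces it up to numerical precision.
rng = np.random.default_rng(1)
v1, v2 = rng.random((8, 2)), rng.random((8, 2))
tgt = 0.6 * v1 + 0.4 * v2 + 0.05
cx, cy, synth = lcv_coefficients(v1, v2, tgt)
print(np.max(np.abs(synth - tgt)))   # close to 0
```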