Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, by reviewing the
literature and relating existing work through both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Fast and robust hybrid framework for infant brain classification from structural MRI: a case study for early diagnosis of autism.
The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is to obtain accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which will be the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model that serves to learn the visual appearance of the brain texture, and a geometric model that preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images, that are adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results have been evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: Dice coefficient, 95-percentile Hausdorff distance, and absolute volume difference. The proposed method ranked first in terms of both performance and speed
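Two of the three MRBrainS13 metrics mentioned above are straightforward to state in code. The sketch below is a minimal illustration on hypothetical toy masks, not the challenge organizers' evaluation code; the 95-percentile Hausdorff distance is omitted for brevity.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def abs_volume_difference(pred, gt):
    """Absolute volume difference, as a percentage of ground-truth volume."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum() * 100.0

# Toy 2D "segmentations" standing in for 3D brain-tissue masks.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True  # shifted one column

print(dice_coefficient(pred, gt))       # 12 overlap voxels -> 24/32 = 0.75
print(abs_volume_difference(pred, gt))  # 0.0 (equal volumes)
```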
Performance Analysis of a Novel GPU Computation-to-core Mapping Scheme for Robust Facet Image Modeling
Though the GPGPU concept is well-known
in image processing, much more work remains to be done
to fully exploit GPUs as an alternative computation
engine. This paper investigates the computation-to-core
mapping strategies to probe the efficiency and scalability
of the robust facet image modeling algorithm on GPUs.
Our fine-grained computation-to-core mapping scheme
shows a significant performance gain over the standard
pixel-wise mapping scheme. With in-depth performance
comparisons across the two different mapping schemes,
we analyze the impact of the level of parallelism on
the GPU computation and suggest two principles for
optimizing future image processing applications on the
GPU platform
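To make the workload concrete: in the (non-robust) facet model, each pixel independently fits a low-order polynomial surface to its local window, which is exactly the per-pixel unit of work that a pixel-wise GPU mapping assigns to one core. The CPU sketch below fits a plane z = a + b*x + c*y over 3x3 windows; it is a generic illustration of the facet model, not the paper's robust variant or its GPU kernels.

```python
import numpy as np

def facet_fit_plane(image):
    """Pixel-wise least-squares plane fit z = a + b*x + c*y over each 3x3
    window. For x, y in {-1, 0, 1} the normal equations decouple, so the
    coefficients reduce to fixed correlation masks over the window."""
    H, W = image.shape
    a = np.zeros((H, W)); b = np.zeros((H, W)); c = np.zeros((H, W))
    for i in range(1, H - 1):          # every output pixel is independent:
        for j in range(1, W - 1):      # this is the "pixel-wise" unit of work
            win = image[i-1:i+2, j-1:j+2].astype(float)
            a[i, j] = win.mean()                        # constant term
            b[i, j] = (win[:, 2] - win[:, 0]).sum() / 6  # x-slope
            c[i, j] = (win[2, :] - win[0, :]).sum() / 6  # y-slope
    return a, b, c

img = np.fromfunction(lambda i, j: 2 * j + 3 * i, (5, 5))  # an exact plane
a, b, c = facet_fit_plane(img)
print(b[2, 2], c[2, 2])  # -> 2.0 3.0 (recovers the plane's slopes)
```

A finer-grained mapping, as investigated in the paper, would split even the per-window sums across multiple threads rather than giving one thread the whole pixel.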
Supervised Classification: Quite a Brief Overview
The original problem of supervised classification considers the task of
automatically assigning objects to their respective classes on the basis of
numerical measurements derived from these objects. Classifiers are the tools
that implement the actual functional mapping from these measurements---also
called features or inputs---to the so-called class label---or output. The
fields of pattern recognition and machine learning study ways of constructing
such classifiers. The main idea behind supervised methods is that of learning
from examples: given a number of example input-output relations, to what extent
can the general mapping be learned that takes any new and unseen feature vector
to its correct class? This chapter provides a basic introduction to the
underlying ideas of how to come to a supervised classification problem. In
addition, it provides an overview of some specific classification techniques,
delves into the issues of object representation and classifier evaluation, and
(very) briefly covers some variations on the basic supervised classification
task that may also be of interest to the practitioner
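The learning-from-examples idea above can be shown with one of the simplest possible classifiers; this is a generic textbook construction (a nearest-mean classifier), not one drawn from the chapter itself.

```python
import numpy as np

class NearestMeanClassifier:
    """Minimal supervised classifier: learns one prototype (the mean
    feature vector) per class from labeled examples, then assigns any
    new, unseen feature vector to the class of the nearest prototype."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every input to every class prototype
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Example input-output relations: 2D feature vectors with class labels.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
clf = NearestMeanClassifier().fit(X, y)
print(clf.predict(np.array([[0.1, 0.0], [1.0, 0.9]])))  # -> [0 1]
```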
New Techniques in Scene Understanding and Parallel Image Processing.
There has been tremendous research interest in the areas of computer and robotic vision. Scene understanding and parallel image processing are important paradigms in computer vision. New techniques are presented to solve some of the problems in these paradigms. Automatic interpretation of features in a natural scene is the focus of the first part of the dissertation. The proposed interpretation technique consists of a context-dependent feature labeling algorithm using nonlinear probabilistic relaxation, and an expert system. Traditionally, the output of the labeling is analyzed and then recognized by a high-level interpreter. In this new approach, knowledge about the scene is utilized to resolve the inconsistencies introduced by the labeling algorithm. A feature labeling system based on this hybrid technique is designed and developed. The labeling system plays a vital role in the development of an automatic image interpretation system for oceanographic satellite images. An extensive study of existing interpretation techniques in related areas such as remote sensing, medical diagnosis, astronomy, and oceanography has shown that our hybrid approach is unique and powerful. The second part of the dissertation presents the results in the area of parallel image processing. A new approach for parallelizing vision tasks at the low and intermediate levels is introduced. The technique utilizes schemes to embed the inherent data or computational structure used to solve the problem into parallel architectures such as hypercubes. The important characteristic of the technique is that adjacent pixels in the image are mapped to nodes that are at a constant distance in the hypercube. Using the technique, parallel algorithms for neighbor-finding and digital distances are developed. A parallel hypercube sorting algorithm is obtained as an illustration of the technique.
The research in developing these embedding algorithms has paved the way for efficient reconfiguration algorithms for hypercube architectures
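The constant-distance property described above — adjacent pixels landing on hypercube nodes a fixed distance apart — is exactly what the classic reflected-Gray-code embedding provides (consecutive indices differ in one bit). The dissertation's own embedding scheme is not detailed in the abstract, so the sketch below should be read as the standard construction, not necessarily the author's.

```python
def gray(n):
    """Reflected binary Gray code of n: consecutive integers map to
    codewords differing in exactly one bit."""
    return n ^ (n >> 1)

def pixel_to_node(i, j, bits):
    """Map pixel (i, j) of a 2^bits x 2^bits image to a hypercube node
    by concatenating the Gray codes of its row and column indices."""
    return (gray(i) << bits) | gray(j)

def hamming(a, b):
    """Hamming distance = hop distance between two hypercube nodes."""
    return bin(a ^ b).count("1")

bits = 3  # an 8x8 image embedded in a 6-dimensional hypercube
for i in range(8):
    for j in range(8):
        node = pixel_to_node(i, j, bits)
        if j + 1 < 8:  # horizontal neighbors: Hamming distance exactly 1
            assert hamming(node, pixel_to_node(i, j + 1, bits)) == 1
        if i + 1 < 8:  # vertical neighbors: Hamming distance exactly 1
            assert hamming(node, pixel_to_node(i + 1, j, bits)) == 1
print("all 4-neighbors land on adjacent hypercube nodes")
```

With such an embedding, neighbor-finding in the image becomes a single-hop communication in the hypercube, which is what makes the parallel neighborhood algorithms efficient.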
Light-induced regulation of ligand-gated channel activity
The control of ligand-gated receptors with light using photochromic compounds has evolved from the first handcrafted examples to accurate, engineered receptors, whose development is supported by rational design, high-resolution protein structures, comparative pharmacology and molecular biology manipulations. Photoswitchable regulators have been designed and characterized for a large number of ligand-gated receptors in the mammalian nervous system, including nicotinic acetylcholine, glutamate and GABA receptors. They provide a well-equipped toolbox to investigate synaptic and neuronal circuits in all-optical experiments. This focused review discusses the design and properties of these photoswitches, their applications and shortcomings and future perspectives in the field
Nonlocal Graph-PDEs and Riemannian Gradient Flows for Image Labeling
In this thesis, we focus on the image labeling problem, i.e., the task of assigning a unique
label to each pixel so as to simplify the image while reducing its redundant information. We
build upon a recently introduced geometric approach to data labeling by assignment flows
[APSS17], which comprises a smooth dynamical system for data processing on weighted graphs.
We pursue two lines of research that give new application-oriented and theoretical
insights into the underlying segmentation task.
We demonstrate, using the example of Optical Coherence Tomography (OCT), the most widely
used non-invasive method for acquiring large volumetric scans of human retinal tissue, how
incorporating constraints on the geometry of the statistical manifold results in a novel,
purely data-driven geometric approach for order-constrained segmentation of volumetric data
in any metric space. In particular, diagnostic analysis of human eye diseases requires
decisive information in the form of exact measurements of retinal layer thicknesses, which
have to be obtained for each patient separately, resulting in a demanding and time-consuming
task. To ease clinical diagnosis, we introduce a fully automated segmentation algorithm
that achieves high segmentation accuracy and a high level of built-in parallelism. As opposed
to many established retinal layer segmentation methods, we use only local information as
input, without incorporating additional global shape priors. Instead, we achieve the
physiological order of retinal cell layers and membranes through a new formulation of ordered
pairs of distributions in a smoothed energy term. This systematically avoids bias pertaining
to global shape and is hence suited for detecting anatomical changes of retinal tissue
structure. To assess the performance of our approach, we compare two different choices of
features on a data set of manually annotated 3D OCT volumes of healthy human retina, and
evaluate our method against the state of the art in automatic retinal layer segmentation as
well as against manually annotated ground-truth data using different metrics.
We generalize the recent work [SS21] on a variational perspective on assignment flows and
introduce a novel nonlocal partial difference equation (G-PDE) for labeling metric data on
graphs. The G-PDE is derived as a nonlocal reparametrization of the assignment flow approach
that was introduced in J. Math. Imaging & Vision 58(2), 2017. Due to this parametrization,
solving the G-PDE numerically is shown to be equivalent to computing the Riemannian gradient
flow with respect to a nonconvex potential. We devise an entropy-regularized
difference-of-convex-functions (DC) decomposition of this potential and show that the basic
geometric Euler scheme for integrating the assignment flow is equivalent to solving the G-PDE
by an established DC programming scheme. Moreover, the viewpoint of geometric integration
reveals a basic way to exploit higher-order information of the vector field that drives the
assignment flow, in order to devise a novel accelerated DC programming scheme. A detailed
convergence analysis of both numerical schemes is provided and illustrated by numerical
experiments
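The geometric Euler integration of an assignment flow can be sketched in a few lines. The following is a deliberately simplified illustration of the general idea (assignment vectors on the probability simplex, geometric averaging of likelihoods over graph neighborhoods, integration in lifted coordinates); it uses uniform neighborhood weights and a hypothetical toy chain graph, and is not the exact [APSS17] or G-PDE formulation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def assignment_flow(D, adjacency, steps=20, h=0.5):
    """Simplified geometric-Euler sketch of an assignment flow on a graph.

    D[i, k]      : distance of datum i to label k (smaller = better fit).
    adjacency[i] : neighbors of node i (including i itself); uniform weights.
    Each node carries an assignment vector on the probability simplex;
    likelihoods are geometrically averaged over neighborhoods, and the
    flow is integrated in lifted coordinates U with W = softmax(U).
    """
    n, k = D.shape
    U = np.zeros((n, k))                     # lifted (tangent) coordinates
    for _ in range(steps):
        L = softmax(U) * np.exp(-D)          # current assignment x data fit
        L /= L.sum(axis=1, keepdims=True)
        # geometric (log-domain) neighborhood average of likelihoods
        S = np.array([np.log(L[nb] + 1e-30).mean(axis=0) for nb in adjacency])
        U += h * S                           # geometric Euler step
    return softmax(U)                        # near-integral label assignments

# Toy chain graph: 4 nodes, 2 labels; data pull toward labels 0, 0, 1, 1.
D = np.array([[0.0, 1.0], [0.1, 0.9], [0.9, 0.1], [1.0, 0.0]])
adjacency = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]
W = assignment_flow(D, adjacency)
print(W.argmax(axis=1))  # -> [0 0 1 1]
```

The neighborhood coupling smooths the labeling while the dynamics drive every assignment vector toward a simplex vertex, i.e., a unique pixel-wise label decision.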