
    Robust computational intelligence techniques for visual information processing

    This Ph.D. thesis is about image processing by computational intelligence techniques. Firstly, a general overview of this book is given, where the motivation, the hypothesis, the objectives, and the methodology employed are described. The use and analysis of different mathematical norms is our goal. After that, the state of the art focused on the applications of the image processing proposals is presented. In addition, the fundamentals of the image modalities, with particular attention to magnetic resonance, and the learning techniques used in this research, mainly based on neural networks, are summarized. Finally, the mathematical framework on which this work is based, ℓp-norms, is defined. Three parts associated with image processing techniques follow. The first non-introductory part of this book collects the developments concerning image segmentation. Two of them are applications for video surveillance tasks and try to model the background of a scene using a specific camera. The other work is centered on the medical field, where the goal of segmenting diabetic wounds in a very heterogeneous dataset is addressed. The second part is focused on the optimization and implementation of new models for curve and surface fitting in two and three dimensions, respectively. The first work presents a parabola fitting algorithm based on the measurement of the distances of the interior and exterior points to the focus and the directrix. The second work changes to an ellipse shape and combines the information of multiple fitting methods. Last, the ellipsoid problem is addressed in a way similar to the parabola. The third part is exclusively dedicated to the super-resolution of Magnetic Resonance Images. In one of these works, an algorithm based on the random shifting technique is developed. In addition, noise removal and resolution enhancement are studied simultaneously. Finally, the cost function of deep networks is modified with different combinations of norms in order to improve their training. The thesis closes with the general conclusions of the research, as well as possible future research lines that can make use of the results obtained.
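
    To make the focus-directrix idea concrete, here is a minimal Python sketch (not the algorithm from the thesis): a parabola is fitted by minimizing an ℓp-type cost on the residual |distance to focus - distance to directrix|, which vanishes exactly on the parabola. The function names, the horizontal-directrix assumption, and the Nelder-Mead solver are illustrative choices only.

```python
# Minimal sketch, assuming a horizontal directrix y = d (not the thesis algorithm):
# fit focus (fx, fy) and directrix by minimizing sum |dist(focus) - dist(directrix)|^p.
import numpy as np
from scipy.optimize import minimize

def parabola_residuals(params, pts):
    fx, fy, d = params                      # focus (fx, fy), directrix y = d
    dist_focus = np.hypot(pts[:, 0] - fx, pts[:, 1] - fy)
    dist_directrix = np.abs(pts[:, 1] - d)
    return dist_focus - dist_directrix      # zero for points lying on the parabola

def fit_parabola(pts, p=2.0):
    cost = lambda params: np.sum(np.abs(parabola_residuals(params, pts)) ** p)
    x0 = np.array([pts[:, 0].mean(), pts[:, 1].mean() + 1.0, pts[:, 1].min() - 1.0])
    return minimize(cost, x0, method="Nelder-Mead").x

# Noisy samples of y = x^2 / 4, whose focus is (0, 1) and directrix is y = -1.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
pts = np.column_stack([x, x ** 2 / 4]) + 0.02 * rng.standard_normal((60, 2))
print(fit_parabola(pts, p=1.0))             # smaller p is typically more outlier-robust
```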

    Calculating Sparse and Dense Correspondences for Near-Isometric Shapes

    Comparing and analysing digital models are basic techniques of geometric shape processing. These techniques have a variety of applications, such as extracting the domain knowledge contained in the growing number of digital models to simplify shape modelling. Another example application is the analysis of real-world objects, which itself has a variety of applications, such as medical examinations, medical and agricultural research, and infrastructure maintenance. As methods to digitalize physical objects mature, any advances in the analysis of digital shapes lead to progress in the analysis of real-world objects. Global shape properties, like volume and surface area, are simple to compare but contain only very limited information. Much more information is contained in local shape differences, such as where and how a plant grew. Sadly, the computation of local shape differences is hard, as it requires knowledge of corresponding point pairs, i.e. points on both shapes that correspond to each other. The following article thesis (cumulative dissertation) discusses several recent publications for the computation of corresponding points:
    - Geodesic distances between points, i.e. distances along the surface, are fundamental for several shape processing tasks as well as several shape matching techniques. Chapter 3 introduces and analyses fast and accurate bounds on geodesic distances.
    - When building a shape space on a set of shapes, misaligned correspondences lead to points moving along the surfaces and finally to a larger shape space. Chapter 4 shows that this also works the other way around, that is, good correspondences are obtained by optimizing them to generate a compact shape space.
    - Representing correspondences with a “functional map” has a variety of advantages. Chapter 5 shows that representing the correspondence map as an alignment of Green’s functions of the Laplace operator has similar advantages, but is much less dependent on the number of eigenvectors used for the computations.
    - Quadratic assignment problems were recently shown to reliably yield sparse correspondences. Chapter 6 compares state-of-the-art convex relaxations from graphics and vision with methods from discrete optimization on typical quadratic assignment problems emerging in shape matching.
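
    As background for the functional-map items above, the following self-contained sketch shows the standard pipeline on two toy "shapes" (a path graph and a vertex-permuted copy): a functional map C is fitted in a truncated Laplacian eigenbasis from corresponding descriptor functions and then converted into a point-to-point map. It illustrates the basic construction only, not the Green's-function variant of Chapter 5; all sizes and names are invented for the example.

```python
# Toy functional-map pipeline on graph Laplacians; purely illustrative.
import numpy as np

n, k, m = 40, 8, 25                          # vertices, basis size, descriptor count
rng = np.random.default_rng(1)

# Path-graph Laplacian for shape X and a vertex-permuted copy for shape Y.
L = np.diag(np.r_[1, 2 * np.ones(n - 2), 1]) - np.eye(n, k=1) - np.eye(n, k=-1)
perm = rng.permutation(n)                    # Y vertex j corresponds to X vertex perm[j]
L_Y = L[np.ix_(perm, perm)]

phi_X = np.linalg.eigh(L)[1][:, :k]          # first k Laplacian eigenvectors of X
phi_Y = np.linalg.eigh(L_Y)[1][:, :k]        # first k Laplacian eigenvectors of Y

# Corresponding descriptor functions: smooth random functions transported by perm.
F = phi_X @ rng.standard_normal((k, m))
G = F[perm]

A, B = phi_X.T @ F, phi_Y.T @ G              # spectral coefficients on X and Y
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T   # functional map: C @ A ~= B

# Point-to-point map: match rows of phi_Y against rows of phi_X mapped through C.
mapped = phi_X @ C.T
recovered = np.argmin(((phi_Y[:, None, :] - mapped[None, :, :]) ** 2).sum(-1), axis=1)
print("fraction of correct correspondences:", np.mean(recovered == perm))
```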

    Source localization and denoising: a perspective from the TDOA space

    In this manuscript, we formulate the problem of denoising Time Differences of Arrival (TDOAs) in the TDOA space, i.e., the Euclidean space spanned by TDOA measurements. The method consists of pre-processing the TDOAs with the purpose of reducing the measurement noise. The complete set of TDOAs (i.e., TDOAs computed at all microphone pairs) is known to form a redundant set, which lies on a linear subspace in the TDOA space. Noise, however, prevents TDOAs from lying exactly on this subspace. We therefore show that TDOA denoising can be seen as a projection operation that suppresses the component of the noise that is orthogonal to that linear subspace. We then generalize the projection operator to the cases where the set of TDOAs is incomplete. We analytically show that this operator improves the localization accuracy, and we further confirm this via simulation.
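
    The projection view of denoising is easy to reproduce numerically. The sketch below (illustrative, not the paper's exact operator) builds the difference matrix D that maps the N arrival times to the complete set of pairwise TDOAs; projecting noisy measurements onto range(D) removes the noise component orthogonal to that subspace.

```python
# Project noisy complete TDOAs onto the subspace spanned by t_i - t_j; illustrative sketch.
import numpy as np
from itertools import combinations

N = 6
pairs = list(combinations(range(N), 2))          # all N*(N-1)/2 microphone pairs
D = np.zeros((len(pairs), N))
for row, (i, j) in enumerate(pairs):
    D[row, i], D[row, j] = 1.0, -1.0             # tau_ij = t_i - t_j

rng = np.random.default_rng(0)
t = rng.uniform(0, 1e-3, N)                      # arrival times in seconds
tau_true = D @ t
tau_noisy = tau_true + 1e-5 * rng.standard_normal(len(pairs))

P = D @ np.linalg.pinv(D)                        # orthogonal projector onto range(D)
tau_denoised = P @ tau_noisy

print("noise norm before:", np.linalg.norm(tau_noisy - tau_true))
print("noise norm after: ", np.linalg.norm(tau_denoised - tau_true))
```

    The sketch covers only the complete TDOA set; the paper further generalizes the projector to incomplete sets.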

    Robust Wide-Baseline Stereo Matching for Sparsely Textured Scenes

    The task of wide-baseline stereo matching algorithms is to identify corresponding elements in pairs of overlapping images taken from significantly different viewpoints. Such algorithms are a key ingredient of many computer vision applications, including object recognition, automatic camera orientation, 3D reconstruction, and image registration. Although today's methods for wide-baseline stereo matching produce reliable results for typical application scenarios, they assume properties of the image data that are not always given, for example a significant amount of distinctive surface texture. For such problems, highly advanced algorithms have been proposed, which are often very problem-specific, difficult to implement, and hard to transfer to new matching problems. The motivation for our work comes from the belief that we can find a generic formulation for robust wide-baseline image matching that is able to solve difficult matching problems and is at the same time applicable to a variety of applications. It should be easy to implement and have good semantic interpretability. Our key contribution is therefore the development of a generic statistical model for wide-baseline stereo matching, which seamlessly integrates different types of image features, similarity measures, and spatial feature relationships as information cues. It unifies the ideas of existing approaches into a Bayesian formulation, which has a clear statistical interpretation as the MAP estimate of a binary classification problem. The model ultimately takes the form of a global minimization problem that can be solved with standard optimization techniques. The particular type of features, measures, and spatial relationships, however, is not prescribed. A major advantage of our model over existing approaches is its ability to compensate for weaknesses in one information cue implicitly by exploiting the strengths of others. In our experiments we concentrate on images of sparsely textured scenes as a specifically difficult matching problem. Here the number of stable image features is typically rather small, and the distinctiveness of feature descriptions is often low. We use the proposed framework to implement a wide-baseline stereo matching algorithm that can deal with poor texture better than established methods. For demonstrating the practical relevance, we also apply this algorithm to a system for automatic image orientation. Here, the task is to reconstruct the relative 3D positions and orientations of the cameras corresponding to a set of overlapping images. We show that our implementation leads to more successful results in the case of sparsely textured scenes, while still retaining state-of-the-art performance on standard datasets.
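
    To illustrate the kind of formulation described above, the toy sketch below casts match selection as a binary labelling problem: each candidate correspondence is accepted or rejected by minimizing an energy with a unary (descriptor-similarity) term and a pairwise (spatial-consistency) term. The terms, weights, and brute-force minimization are invented for the illustration; they are not the thesis' actual model or optimizer.

```python
# Toy binary-labelling energy for match selection; brute force over a tiny candidate set.
import itertools
import numpy as np

# Candidate correspondences: (point in image 1, point in image 2, descriptor distance).
cands = [((0, 0), (10, 1), 0.10),
         ((5, 0), (15, 1), 0.20),
         ((9, 3), (19, 4), 0.15),
         ((2, 7), (30, 30), 0.12)]            # good descriptor, inconsistent geometry

def unary(c):
    return c[2] - 0.5                          # reward candidates with small descriptor distance

def pairwise(c1, c2):
    d1 = np.subtract(c1[1], c1[0])             # displacement implied by match 1
    d2 = np.subtract(c2[1], c2[0])
    return 0.05 * np.linalg.norm(d1 - d2)      # penalise spatially inconsistent pairs

def energy(labels):
    e = sum(unary(c) for c, lab in zip(cands, labels) if lab)
    e += sum(pairwise(cands[i], cands[j])
             for i, j in itertools.combinations(range(len(cands)), 2)
             if labels[i] and labels[j])
    return e

best = min(itertools.product([0, 1], repeat=len(cands)), key=energy)
print("accepted candidates:", [i for i, lab in enumerate(best) if lab])
```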

    Novel solutions to classical signal processing problems in optimization framework

    Novel approaches for three classical signal processing problems are proposed in an optimization framework to provide further flexibility and performance improvement. In the first part, a new technique, which uses Hermite-Gaussian (HG) functions, is developed for the analysis of signals whose components have non-overlapping compact time-frequency supports. Once the support of each signal component is properly transformed, HG functions provide optimal representations. Conducted experiments show that the proposed method provides reliable identification and extraction of signal components even under severe noise. In the second part, three different approaches are proposed for designing a set of orthogonal pulse shapes for ultra-wideband communication systems with wideband antennas. Each pulse shape is modelled as a linear combination of time-shifted and scaled HG functions. By solving the constructed optimization problems, high-energy pulse shapes that maintain orthogonality at the receiver and have the desired time-frequency characteristics are obtained. Moreover, by showing that derivatives of HG functions can be represented as linear combinations of HG functions, a simple optimal correlating receiver structure is proposed. In the third part, two different methods for phase-only control of array antennas based on semidefinite modelling are proposed. First, the antenna pattern design problem is formulated as a non-convex quadratically constrained quadratic program (QCQP). Then, by relaxing the QCQP formulation, a convex semidefinite program (SDP) is obtained. For moderate-size arrays, a novel iterative rank refinement algorithm is proposed to achieve a rank-1 solution for the obtained SDP, which is the solution to the original QCQP formulation. For large arrays, an alternating direction method of multipliers (ADMM) based solution is developed. Conducted experiments show that both methods provide effective phase settings, which generate beam patterns under highly flexible constraints. Ph.D. thesis by Yaşar Kemal Alp.
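
    For reference, a short sketch of the Hermite-Gaussian building blocks used above (the thesis' optimization problems themselves are not reproduced): it evaluates orthonormal HG functions and numerically checks that the derivative of an HG function is a linear combination of its neighbours, the kind of relation the abstract uses to obtain a simple correlating receiver.

```python
# Orthonormal Hermite-Gaussian functions and a numerical check of the derivative identity
# hg_n'(t) = sqrt(n/2) * hg_{n-1}(t) - sqrt((n+1)/2) * hg_{n+1}(t).
import numpy as np
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

def hg(n, t):
    """n-th Hermite-Gaussian function, orthonormal on the real line."""
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return norm * eval_hermite(n, t) * np.exp(-t ** 2 / 2.0)

t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]

# Orthonormality: <hg_3, hg_3> ~= 1 and <hg_3, hg_5> ~= 0.
print(dt * np.sum(hg(3, t) ** 2), dt * np.sum(hg(3, t) * hg(5, t)))

# Derivative as a linear combination of neighbouring HG functions.
n = 4
lhs = np.gradient(hg(n, t), dt)
rhs = sqrt(n / 2.0) * hg(n - 1, t) - sqrt((n + 1) / 2.0) * hg(n + 1, t)
print(np.max(np.abs(lhs - rhs)))             # small, up to finite-difference error
```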

    Adaptable optimization : theory and algorithms

    Optimization under uncertainty is a central ingredient for analyzing and designing systems with incomplete information. This thesis addresses uncertainty in optimization in a dynamic framework where information is revealed sequentially and future decisions are adaptable, i.e., they depend functionally on the information revealed in the past. Such problems arise in applications where actions are repeated over a time horizon (e.g., portfolio management or dynamic scheduling problems), or that have multiple planning stages (e.g., network design). The first part of the thesis focuses on the robust optimization approach to systems with uncertainty. Unlike the probability-driven stochastic programming approach, robust optimization is built on deterministic, set-based formulations of uncertainty. This thesis seeks to place Robust Optimization within a dynamic framework. In particular, we introduce the notion of finite adaptability. Using geometric results, we characterize the benefits of adaptability, and use these theoretical results to design efficient algorithms for finding near-optimal protocols. Among the novel contributions of the work are the capacity to accommodate discrete variables and the development of a hierarchy of adaptability. The second part of the thesis takes a data-driven view of uncertainty. The central questions are (a) how can we construct adaptability in multi-stage optimization problems given only data, and (b) what feasibility guarantees can we provide. Multi-stage Stochastic Optimization typically requires exponentially many data points. Robust Optimization, on the other hand, has a very limited ability to address multi-stage optimization in an adaptable manner. We present a hybrid sample-based robust optimization methodology for constructing adaptability in multi-stage optimization problems that is both tractable and flexible, offering a hierarchy of adaptability. We prove polynomial upper bounds on sample complexity. We further extend our results to multi-stage problems with integer variables in the future stages. We illustrate the ideas above on several problems in Network Design and Portfolio Optimization. The last part of the thesis focuses on an application of adaptability, in particular the ideas of finite adaptability from the first part of the thesis, to the problem of air traffic control. The main problem is to sequentially schedule the departures, routes, ground-holding, and air-holding for every flight over the national air space (NAS). The schedule seeks to minimize the aggregate delay incurred, while satisfying capacity constraints that specify the maximum number of flights that can take off or land at a particular airport, or fly over the same sector of the NAS at any given time. These capacities are impacted by the weather conditions. Since we receive an initial weather forecast, and then updates throughout the day, we naturally have a multistage optimization problem with sequentially revealed uncertainty. We show that finite adaptability is natural, since the scheduling problem is inherently finite and the uncertainty set is low-dimensional. We illustrate both the applicability of finite adaptability and its effectiveness through several examples. Ph.D. thesis by Constantine Caramanis (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006).
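
    A tiny numerical illustration of finite adaptability (not the thesis' formulation): demand and the availability of a cheap resource vary over a two-scenario uncertainty set, and we compare the worst-case cost of a single static robust decision with that of a finitely adaptable policy that commits one decision per piece of the set. Both problems are ordinary linear programs in epigraph form; all data are invented for the example.

```python
# Static robust vs finitely adaptable recourse on a two-scenario toy problem.
import numpy as np
from scipy.optimize import linprog

scenarios = [dict(demand=10.0, cap=10.0),        # cheap resource y1 available
             dict(demand=4.0, cap=0.0)]          # cheap resource y1 unavailable
cost = np.array([1.0, 2.0])                      # unit costs of y1 and y2

def worst_case_cost(pieces):
    """Min worst-case cost with one (y1, y2) decision per piece of the scenario set."""
    K = len(pieces)
    nvar = 2 * K + 1                             # (y1, y2) per piece plus epigraph variable t
    c = np.zeros(nvar)
    c[-1] = 1.0                                  # minimize t
    A_ub, b_ub = [], []
    for k, piece in enumerate(pieces):
        cost_row = np.zeros(nvar)
        cost_row[2 * k:2 * k + 2] = cost
        cost_row[-1] = -1.0
        A_ub.append(cost_row); b_ub.append(0.0)  # cost of piece k <= t
        for s in piece:
            dem = np.zeros(nvar)
            dem[2 * k:2 * k + 2] = -1.0
            A_ub.append(dem); b_ub.append(-s["demand"])   # y1 + y2 >= demand
            cap = np.zeros(nvar)
            cap[2 * k] = 1.0
            A_ub.append(cap); b_ub.append(s["cap"])       # y1 <= cap
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * nvar)
    return res.fun

print("static robust decision:", worst_case_cost([scenarios]))          # one piece for all
print("finitely adaptable:    ", worst_case_cost([[s] for s in scenarios]))  # one piece each
```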

    Three essays on nonparametric estimation

    The first essay describes a shape-constrained density estimator, which, in terms of the assumptions on the functional form of the population density, can be viewed as a middle ground between fully nonparametric and fully parametric estimators. For example, typical constraints require the estimator to be log-concave or, more generally, ρ-concave (Koenker and Mizera, 2010). In cases in which the true population density satisfies the shape constraint, these density estimators often compare favorably to their fully nonparametric counterparts; see, for example, Cule et al. (2010) and Koenker and Mizera (2010). The particular shape-constrained density estimator proposed here is defined as the minimizer of the entropy-regularized Wasserstein metric of Cuturi (2013), which can be found with a nearly linear time complexity in the number of points in the mesh (Altschuler et al., 2017). It is also a common thread that links all three essays. After providing results on consistency, the limiting distribution, and the rate of convergence for the estimator, the paper moves on to testing whether a population density satisfies a shape constraint. This is done by deriving the limiting distribution of the regularized Wasserstein metric at the estimator. In the interest of tractability, these results are initially described in terms of ρ-concave shape constraints. The final result provides the additional requirements that arbitrary shape constraints must satisfy for these results to hold. The generalization is then used to explore whether the California Department of Transportation's decision to award construction contracts with the use of a first-price auction is cost minimizing. The shape constraint in this case is given by Myerson's (1981) regularity condition, which is a requirement for the first-price auction to be cost minimizing. The next essay provides a novel nonparametric estimator of the mode of a random variable conditional on covariates, which is also known as modal regression. The estimator is defined by first finding several paths through the data, and then aggregating these paths together with the use of a regularized Wasserstein barycenter. The initial paths each minimize a combination of the distance between subsequent points and the curvature along the path. The paper provides a result on consistency, and shows that the estimator has a time complexity that is O(n²), where n is the number of points in the sample. An approximation is also provided that has a time complexity of O(n^{1+2β}), where β ∈ (0, 1/2). Simulations are provided, and the estimator is then used in an application to find the mode of cumulative undergraduate GPA, conditional on high school GPA and college entrance test scores. Note that the estimators provided in the first two essays require minimizing the regularized Wasserstein metric. In their textbook treatment, Peyré and Cuturi (2019) state that second-order methods are not an “applicable” approach for optimization in this setting. This is because of the large scale of many applications of interest, as well as because the Hessian is dense, poorly scaled, and not expressible in closed form. In the third essay we provide functions to evaluate products with this Hessian and its inverse with a nearly linear time complexity. We also provide recommendations to allow these methods to be applied in standard second-order optimization implementations, and discuss how, in certain favorable cases, these approaches have a provably nearly linear time complexity. Afterward, numerical experiments are carried out outside of these favorable cases. When the derivatives of the constraints are sufficiently sparse, the computational efficiency of the approach in this setting is also favorable.
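
    Since all three essays minimize the entropy-regularized Wasserstein metric, a compact sketch of the underlying Sinkhorn iteration of Cuturi (2013) may be helpful; the toy histograms, the regularization strength, and the fixed iteration count are illustrative only.

```python
# Entropy-regularized optimal transport cost via Sinkhorn scaling iterations.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropy-regularized OT cost between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)                     # Gibbs kernel of Cuturi (2013)
    u = np.ones_like(a)
    for _ in range(n_iter):                  # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # regularized transport plan
    return float(np.sum(P * C))

# Two Gaussian-like histograms on a 1-D grid with squared-distance cost.
x = np.linspace(0.0, 1.0, 50)
C = (x[:, None] - x[None, :]) ** 2
a = np.exp(-(x - 0.3) ** 2 / 0.01) + 1e-6
b = np.exp(-(x - 0.7) ** 2 / 0.01) + 1e-6
a, b = a / a.sum(), b / b.sum()
print(sinkhorn(a, b, C))                     # roughly (0.7 - 0.3)^2 = 0.16 for small eps
```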