18 research outputs found

    Estimation in projective spaces and applications in computer vision

    No full text
    Almost all parameter estimation problems in multi-view image processing and computer vision are estimation problems for homogeneous vectors. Such vectors x ∈ P^n (where P^n denotes projective space) differ from conventional vectors x ∈ R^n in that both sign and magnitude are undetermined. Consequently, vectors x ∈ P^n have n components but only n - 1 degrees of freedom. Homogeneous vectors turn out to be particularly well suited for image processing and computer graphics because points and lines that lie at infinity with respect to R^n (e.g. the horizon, vanishing points) can be represented as vectors in projective space within the framework of projective geometry. For example, all straight lines (even parallel lines!) intersect in a point x ∈ P^n.

    Camera calibration, reconstruction of the three-dimensional world from two or more camera images (e.g. for robot navigation), or the determination of local orientation in images (e.g. for computer-aided fingerprint recognition) are problems in which a homogeneous vector must be determined, that is: estimated, from error-prone measurement data. An algorithm that determines the sought quantities from error-free (or only slightly noisy) data is only part of the solution to such problems, because in practice this is often not sufficient: particularly in image processing, one very frequently deals with problems that react extremely sensitively to perturbations, for example in that even small errors in the input data can produce systematic estimation errors (bias) in the output. Estimation theory addresses the question of how to determine the sought parameters from error-prone measurements in a statistically optimal way (that is: how to avoid the problems just mentioned). Traditionally, estimation theory treats sought parameter vectors in the real space R^n.

    To a certain degree, conventional techniques and methods for point data can also be transferred to directional data and axial data (equivalent to homogeneous vectors): the undetermined magnitude is removed by normalizing to 1 (i.e. projecting onto the multidimensional unit hypersphere), and for axial data the problem of the undefined sign is then resolved by declaring one hemisphere 'valid' and flipping the sign of every vector in the 'wrong' hemisphere. If, in addition, the distribution of the vectors is highly concentrated, it can be approximated by a tangent plane, which again yields a real space in which the estimation problem can be solved with conventional, established methods (a sketch of this workaround is given below).

    This description deliberately sounds somewhat starker than the problem turns out to be in many cases. For example, Carl Friedrich Gauß developed his famous method of least squares for an astronomical problem, for measurements and estimates on the celestial sphere, i.e. for directions and vectors with undetermined magnitude. In many cases (such as this one), an approximation with tangent planes is no problem at all. The same holds for the image processing problems mentioned above: for many of them, one gets quite far with traditional methods of estimation theory. In recent years, general parameter estimation methods have been developed that transfer established methods from R^n to projective space (e.g. the calculus of variations).
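    The hemisphere convention and the tangent-plane approximation described above are easy to state concretely. The following is a minimal sketch (NumPy; helper names are illustrative, not code from the thesis) of this conventional workaround for axial data:

```python
import numpy as np

def axes_to_hemisphere(X, ref=None):
    """Normalize each row of X to unit length, then flip signs so that
    all axes lie in the hemisphere around a reference direction."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto unit hypersphere
    if ref is None:
        ref = U[0]                                    # arbitrary choice of 'valid' hemisphere
    signs = np.where(U @ ref < 0, -1.0, 1.0)          # vectors in the 'wrong' hemisphere ...
    return U * signs[:, None]                         # ... get their sign flipped

def tangent_plane_residuals(U, mu):
    """Approximate a highly concentrated sample U on the sphere by its
    residuals in the tangent plane at the unit mean direction mu."""
    # component orthogonal to mu; only valid for small angular spreads
    return U - np.outer(U @ mu, mu)
```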
    Such iterative methods can take differing measurement uncertainties into account and deliver statistically optimized estimates (a clear advance over the first generation of methods, which ignore error information and effectively work reliably only on error-free or nearly error-free data). If the level of measurement error is not low, however (i.e. a moderate or poor signal-to-noise ratio, which is not uncommon in problems such as orientation estimation), one quickly runs into limits, which manifest themselves in particular as convergence problems of existing iterative methods. Estimating homogeneous vectors is simply something fundamentally different from estimating inhomogeneous vectors, and existing methods take this into account only insufficiently. Moreover, the description of uncertainties for homogeneous estimation results has made little progress for years: what has been standard vocabulary for inhomogeneous vectors for decades, in the form of covariance matrices, works for homogeneous vectors only in a very limited way, with approximations and special cases. The present thesis attempts to break through these limits and to take first steps towards an estimation theory for homogeneous vectors. To this end, two fundamentally different fields are advanced and linked: directional statistics and estimation theory. Concrete estimation problems from image processing serve as motivation and application. First, the statistics of directions (signed unit vectors) and of axes (unit vectors with undetermined sign) is presented; directional statistics is not elaborated further, and in the following only points (i.e. R^n) and axes (i.e. P^n) are treated. Axial statistics, so far regarded only as the special case of directional statistics with a symmetric probability density function, is developed further, and axial data is established as a third data type with equal standing next to point data and directional data. This permits the development of novel statistical descriptors such as means, expectation values or variances of axial distributions ...

    In this thesis, we opened the door towards a novel estimation theory for homogeneous vectors and have taken several steps into this new and uncharted territory. The present state of the art for homogeneous estimation problems treats such vectors p ∈ P^n as unit vectors embedded in R^(n+1) and approximates the unit hypersphere by a tangent plane (which is an n-dimensional real space, thus having the same number of degrees of freedom as P^n). This approach allows the use of known and established methods from real space (e.g. the variational approach, which leads to the FNS algorithm), but it only works well for small errors and has several drawbacks:

    • The unit sphere is a two-sheeted covering space of the projective space. Embedding approaches cannot model this fact and can therefore cause a degradation of estimation quality.
    • Linearization breaks down if distributions are not highly concentrated (e.g. if data configurations approach degenerate situations).
    • While estimation in tangent planes is possible with little error, the characterization of uncertainties with covariance matrices is much more problematic. Covariance matrices are not suited for modelling axial uncertainties if distributions are not concentrated.

    Therefore, we linked approaches from directional statistics and estimation theory together.
    (Homogeneous) TLS estimation was identified as the central model for homogeneous estimation, and links to axial statistics were established. In the first chapters, a unified estimation theory for point data and axial data was developed. In contrast to present approaches, we identified axial data as a specific data model (and not just as directional data with a symmetric probability density function); this led to the development of novel terms like axial mean vectors, axial variances and axial expectation values. Like a tunnel which is constructed from both ends simultaneously, we also drilled from the parameter estimation side towards directional/axial statistics in the second part. The presentation of parameter estimation given in this thesis deviates strongly from all known textbooks by presenting homogeneous estimation problems as a distinguished class of problems which calls for different estimation tools.

    Using the results from the first part, the TLS solution can be interpreted as the weighted anti-mean vector of an axial sample. This link allows us to use our results from axial statistics; for instance, the certainty of the anti-mode (i.e. of the TLS solution!) can be described with a weighted Bingham distribution (see (3.91)). While present approaches are only interested in the eigenvector of some matrix, we can now exploit the whole mean scatter matrix to describe the TLS solution and its certainty. Algorithms like FNS, HEIV or renormalization were presented in a common context and linked to each other. One central result is that all iterative homogeneous estimation algorithms essentially minimize a series of evolving Rayleigh coefficients, which corresponds to a series of (converging?) cost functions. Statistical optimization is only possible if we clearly identify every step as what it exactly is. For instance, the vague statement "solving Xp ≈ 0" means nothing but setting p̂ := arg min_p (p^T X p) / (p^T p). We identified the most complex scenario for which closed-form optimal solutions are possible (in terms of axial statistics: the type-I matrix weighted model). The IETLS approach developed in this thesis then solves general type-II matrix weighted problems via the iterative solution of a series of type-I matrix weighted problems. This approach also allows one to build convergent schemes including robust and/or constrained estimation, in contrast to other approaches which can have severe convergence problems even without such extensions if error levels are not low.

    Chapter 6 then is another big step forward. We presented the theoretical background of homogeneous estimation by introducing novel concepts like singular vector unbiasedness of random matrices and solved the problem of optimal estimation for correlated data. For instance, these results could be used for better estimation of local image orientation / optical flow (see section 7.2). At the end of this thesis, simulations and experiments for a few computer vision applications were presented; besides orientation estimation, especially the results for robust and constrained estimation of fundamental matrices are impressive. The novel algorithms are applicable to many other applications not presented here, for instance camera calibration, the factorization algorithm for multi-view structure from motion, or conic fitting. The fact that this work has paved the way for a great deal of further research is certainly a good sign.
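    The central computation above, interpreting the TLS solution as the minimizer of a Rayleigh quotient, reduces to a symmetric eigenproblem. A minimal sketch of this step (NumPy; function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def homogeneous_tls(A, weights=None):
    """Homogeneous TLS step: for the system A p ~ 0, return the unit vector
    p_hat = arg min_p (p^T X p)/(p^T p) with scatter matrix X = A^T A,
    i.e. the eigenvector belonging to the smallest eigenvalue of X."""
    if weights is not None:
        A = A * np.sqrt(weights)[:, None]  # optional scalar row weights
    X = A.T @ A                            # (weighted) scatter matrix
    evals, evecs = np.linalg.eigh(X)       # eigenvalues in ascending order
    return evecs[:, 0]                     # defined only up to sign: an axis, not a direction
```

    Note that the full eigensystem of X, not just the smallest eigenvector, carries the uncertainty information that the thesis exploits via the weighted Bingham description.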

    Unbiased Errors-in-Variables Estimation Using Generalized Eigensystem Analysis

    No full text
    Recent research has provided several new and fast approaches for the class of parameter estimation problems that are common in computer vision. The incorporation of complex noise models (mostly in the form of covariance matrices) into errors-in-variables or total least squares models has led to a considerable improvement of existing algorithms.
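    As a rough illustration of the generalized-eigensystem idea: instead of the plain eigenproblem X p = λ p of ordinary TLS, one solves X p = λ N p, where N encodes the noise model. The concrete construction of N in the paper may differ; the sketch below (SciPy, illustrative names) only shows the mechanics:

```python
import numpy as np
from scipy.linalg import eigh

def eiv_generalized(X, N):
    """X: data scatter matrix; N: symmetric positive definite matrix derived
    from the noise covariances. Returns the eigenvector of the smallest
    generalized eigenvalue of (X, N), which corrects the bias of the
    naive (unweighted) TLS estimate."""
    evals, evecs = eigh(X, N)        # generalized symmetric-definite problem
    p = evecs[:, 0]                  # smallest generalized eigenvalue first
    return p / np.linalg.norm(p)     # renormalize to a unit (homogeneous) vector
```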

    Subspace methods and equilibration in computer vision

    No full text
    Many computer vision problems (e.g. the estimation of the fundamental matrix, the homography matrix, or camera calibration, as well as the factorization method) belong to the class of subspace problems which are well known in signal processing. They essentially consist in dividing a vector space into a lower-dimensional data space and an orthogonal error space, based on the information available from a set of measurement vectors. This can be formulated as a rank reduction problem on a perturbed measurement matrix. In this paper, a powerful technique for adjusting the error structure of this measurement matrix is introduced. In this way, the rank reduction works in a statistically optimized manner, and much better estimates of the sought entities (lower variance, reduced or eliminated bias) can be obtained. It is well known that a proper weighting of the measurement matrix changes the error metric of the optimization problem belonging to the rank reduction task, but very little is known about how these weighting matrices have to be chosen in a statistically optimal way. In this paper, we describe four applications of our method: fundamental matrix estimation, camera calibration, the factorization method, and homography matrix estimation.
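    A schematic sketch of equilibrated rank reduction may help fix ideas; how the weighting matrices W_r and W_c should be chosen is precisely the paper's subject, so they are simply taken as given here (all names are illustrative, and the weights are assumed invertible):

```python
import numpy as np

def equilibrated_rank_reduction(M, W_r, W_c, rank):
    """Weight the perturbed measurement matrix M on both sides, truncate
    its SVD to the target rank (the data space), then undo the weighting."""
    Mw = W_r @ M @ W_c                               # adjust the error structure
    U, s, Vt = np.linalg.svd(Mw, full_matrices=False)
    s[rank:] = 0.0                                   # discard the error space
    Mw_low = (U * s) @ Vt                            # best low-rank fit in the new metric
    return np.linalg.inv(W_r) @ Mw_low @ np.linalg.inv(W_c)
```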

    A Statistical Extension Of Normalized Convolution And Its Usage For Image Interpolation

    No full text
    The natural characteristics of image signals and the statistics of measurement noise are decisive for designing optimal filter sets and optimal estimation methods in signal processing. Astonishingly, this principle has so far only partially found its way into the field of image sequence processing. We show how a Wiener-type MMSE optimization criterion for the resulting image signal, based on a simple covariance model of images or image sequences, provides a direct and intelligible solution for various, apparently different problems, such as error concealment or the adaptation of filters to signal and noise statistics.
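    For orientation, the generic closed-form solution of such a linear MMSE (Wiener) criterion is shown below; the covariance model and the observation operator H (e.g. subsampling for interpolation, masking for error concealment) are placeholders, not the paper's concrete choices:

```python
import numpy as np

def mmse_estimate(y, H, C_x, C_n):
    """Linear MMSE estimate x_hat = C_x H^T (H C_x H^T + C_n)^{-1} y for
    observations y = H x + n, with signal covariance C_x and noise
    covariance C_n (both assumed known from the covariance model)."""
    G = C_x @ H.T @ np.linalg.inv(H @ C_x @ H.T + C_n)  # Wiener gain matrix
    return G @ y
```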

    DESIGN OF MULTI-STEERABLE FILTERS AND THEIR APPLICATION FOR THE DETECTION OF CORNERS AND JUNCTIONS

    No full text
    The theory of steerable filters (introduced by Freeman and Adelson in 1991) offers a convenient and computationally efficient method to represent the rotation of a specific class of 2D filters as a linear combination of base filters. In this paper, we generalize this theory to filters which are modeled as point-by-point products of multiple (in our examples: two) base filters, each of which is rotated by an individual rotation angle, thus introducing the term "multi-steerability". While standard steerable filters are useful for the detection of edges or lines (oriented in some unknown direction), our novel extended theory can be applied to the detection of features which are characterized by multiple orientations, for instance corners or line crossings, which greatly increases the applicability of the theory. We conclude our paper with an example of detecting the crossings in a checkerboard image using double-steerable filters. Index Terms: steerable filters, feature detection, orientation estimation
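    A small sketch may clarify the construction. The classical single-angle steering of a Gaussian derivative pair follows Freeman and Adelson; the point-by-point product in double_steer only schematically mirrors the paper's multi-steerable filters (filter size, sigma, and normalization are assumptions):

```python
import numpy as np

def gaussian_derivative_bases(size=9, sigma=1.5):
    """Return the two base filters G_x, G_y (unnormalized first derivatives
    of a 2D Gaussian) on a size x size grid."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return -xx * g, -yy * g

def steer(Gx, Gy, theta):
    """Classical steerability: the filter rotated to angle theta is the
    linear combination cos(theta) * G_x + sin(theta) * G_y."""
    return np.cos(theta) * Gx + np.sin(theta) * Gy

def double_steer(Gx, Gy, theta1, theta2):
    """Point-by-point product of two individually steered base filters,
    the building block for detecting two-orientation features such as
    corners or line crossings."""
    return steer(Gx, Gy, theta1) * steer(Gx, Gy, theta2)
```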

    Electrochemical evaluations of TiO2 nano-films for advanced engineering applications

    No full text
    Thesis (M.A.)--Özyeğin University, Graduate School of Sciences and Engineering, Department of Electrical and Electronics Engineering, May 2018.

    This study evaluates the performance of TiO2 nano-films for engineering applications, both in biomedicine and microelectronics, where Chemical Mechanical Planarization (CMP) is a main process used for planarizing the metal surface by producing such an oxide nano-film, which is known to be intact and protective. The evaluation of titanium oxide nano-films in biomedical applications is investigated first. Electrochemical experiments, such as potentiostatic sweeps and potentiodynamic polarizations of Ti as a function of oxidizer concentration, are performed on biomedical-grade titanium plates. For the surface characterization of the TiO2 films, two main methods are employed: contact angle and wettability measurements, and surface roughness measurements. CMP of the Ti surface is performed while controlling the slurry oxidizer concentration, which leads to material removal rate evaluations and surface characterization of chemically-mechanically polished Ti surfaces. CMP is also a widely used process in the manufacturing of microelectronic circuits, where it is common to planarize metallic layers such as tungsten in the transistor via and copper in the metallization lines. The integration of the metallization also involves the deposition of barrier layers, which are necessary to stop the diffusion of metal to the transistor as well as to improve the adhesion of the metal to the underlying layers. In this study, titanium barrier CMP was thoroughly studied by electrochemical analyses with respect to the tungsten metal itself. Ti has the unique ability to form a protective oxide layer, which was characterized and analyzed under different oxidizer concentrations. Potentiodynamic and potentiostatic scans were performed in H2O2 to provide more information on the passivation and corrosion behavior under the selected conditions. Moreover, the effect of slurry solid loading, in relation to oxidizer concentration, on film formation and removal during CMP was investigated for both materials. The application of the corrosion behavior of Ti as a barrier material for W-based interconnects was further studied by evaluating surface wettability, roughness and topography analyses.

    A Note on Error Metrics and Optimization Criteria in 3D Vision

    No full text
    All computer vision algorithms applied to real data have to deal with input data corrupted by errors, which makes it necessary to select an appropriate loss function and to minimize it. The fact that the errors in different components can be of individual size or even correlated makes a statistical analysis of an algorithm absolutely necessary. A statistically justified loss function can only be obtained by a statistical consideration of errors; in other words, optimization criteria that are not statistically justified (i.e. criteria that use a wrong error metric) may result in drastic failure of an algorithm in the presence of errors.
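    A toy example of the point being made, assuming residual vectors with a known covariance C (names are illustrative): the statistically justified metric is the Mahalanobis distance rather than the plain Euclidean norm, which is only correct for isotropic, uncorrelated errors:

```python
import numpy as np

def mahalanobis_loss(residuals, C):
    """Sum of r^T C^{-1} r over all residual vectors r: the statistically
    justified loss when errors have covariance C."""
    Ci = np.linalg.inv(C)
    return sum(r @ Ci @ r for r in residuals)

def naive_loss(residuals):
    """Plain sum of squared norms: a wrong error metric whenever
    C differs from sigma^2 * I."""
    return sum(r @ r for r in residuals)
```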