239 research outputs found
Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments
In this work, a real-time-capable multi-fisheye camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects ranging from omnidirectional camera calibration to the development of a complete multi-fisheye camera SLAM system based on a generic multi-camera bundle adjustment method
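At the core of such a system, bundle adjustment minimizes reprojection error jointly over camera poses and 3-D points. A minimal illustrative sketch of the residual that a solver would minimize, assuming a simple pinhole model rather than the fisheye model used in the thesis (all names here are hypothetical):

```python
import numpy as np

def project(point_w, R, t, f):
    """Project a 3-D world point into a camera with rotation R,
    translation t and focal length f (simple pinhole model)."""
    p_c = R @ point_w + t          # world -> camera frame
    return f * p_c[:2] / p_c[2]    # perspective division

def reprojection_residuals(points_w, observations, R, t, f):
    """Stacked reprojection residuals a bundle-adjustment solver minimises."""
    res = [project(X, R, t, f) - uv
           for X, uv in zip(points_w, observations)]
    return np.concatenate(res)

# A point straight ahead of an identity-pose camera projects to the origin.
R, t, f = np.eye(3), np.zeros(3), 500.0
uv = project(np.array([0.0, 0.0, 2.0]), R, t, f)   # -> [0., 0.]
```

In a multi-camera setup, each residual additionally routes through the fixed rig-to-camera transform of the camera that made the observation; the solver then optimizes only the rig poses and points.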
Pattern Recognition
A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while an understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition
Bio: A Multimodal biometric authentication system for person identification and verification
Not available
Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2
Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation
Proceedings of the 35th WIC Symposium on Information Theory in the Benelux and the 4th joint WIC/IEEE Symposium on Information Theory and Signal Processing in the Benelux, Eindhoven, the Netherlands May 12-13, 2014
Compressive sensing (CS) as an approach for data acquisition has recently received much attention. In CS, the signal recovery problem from the observed data requires solving for a sparse vector from an underdetermined system of equations. The underlying sparse signal recovery problem is quite general, with many applications, and is the focus of this talk. The main emphasis will be on Bayesian approaches for sparse signal recovery. We will examine sparse priors such as the super-Gaussian and Student-t priors and appropriate MAP estimation methods. In particular, re-weighted l2 and re-weighted l1 methods developed to solve the optimization problem will be discussed. The talk will also examine a hierarchical Bayesian framework and then study in detail an empirical Bayesian method, the Sparse Bayesian Learning (SBL) method. If time permits, we will also discuss Bayesian methods for sparse recovery problems with structure: intra-vector correlation in the context of the block sparse model and inter-vector correlation in the context of the multiple measurement vector problem
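The re-weighted l2 idea mentioned above can be sketched in a few lines: each iteration solves a weighted minimum-norm problem whose weights come from the previous iterate, so small coefficients are progressively driven toward zero. A toy FOCUSS-style sketch; the matrix size, seed, and tolerances are illustrative, not from the talk:

```python
import numpy as np

def reweighted_l2(A, y, iters=30, eps=1e-8):
    """FOCUSS-style re-weighted l2 sparse recovery.

    Each iterate solves min ||W^{-1/2} x||_2 s.t. Ax = y with
    W = diag(|x_prev|), which approximates l1 minimization."""
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T, y)      # minimum-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)           # weights from previous iterate
        G = A @ W @ A.T + 1e-12 * np.eye(m)    # small ridge for stability
        x = W @ A.T @ np.linalg.solve(G, y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))                # underdetermined: 4 eqs, 8 unknowns
x_true = np.zeros(8)
x_true[2] = 3.0                                # 1-sparse ground truth
y = A @ x_true
x_hat = reweighted_l2(A, y)                    # consistent with y, nearly sparse
```

Each iterate satisfies Ax = y (up to the tiny ridge), so the iteration moves along the solution set of the underdetermined system toward a sparse point.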
Chemistry on Graphene
State-of-the-art transmission electron microscopes (TEMs) are able to
achieve sub-Angstrom resolution, so matter can be studied at the atomic level: with a TEM, atomic structures and dynamic processes can be observed directly, and physical and chemical properties can be derived from them. In this work, graphene, a one-atom-thick material with outstanding physical and chemical properties, has been thoroughly characterised by different TEM
techniques. The structural description of graphene allowed us to compare
graphene samples fabricated by different methods and to assess their
quality. Furthermore, graphene has been used as a substrate, protective
layer, raw material, surface template and nano-confiner. Graphene substrates
were used to support nano-objects which were designed for biological
applications. Treatments of the graphene substrates prior to sample
deposition as well as sample deposition techniques provided the means to obtain samples suitable for TEM investigations. The TEM studies of nano-objects (Au NCs, QDs, nanodiamonds with NV centres), deposited on
graphene, resulted in the characterisation of their structure, size and
dispersion. DNA deposited on graphene was also investigated by TEM. The
results showed that graphene substrates can be used to image the structure of biological samples. This thesis also shows that graphene can protect radiation-sensitive materials such as C3N4 and MoS2 from the electron beam, allowing these materials to be imaged in their pristine state. By using the electron beam to nano-engineer bilayer
graphene it was possible to create single-walled carbon nanotubes. In
another experiment, graphene served as a surface template on which an adlayer of graphene grew from residual contamination during imaging. Experiments with
water trapped between graphene layers (nano-confinement) resulted in the
detection, observation and characterisation of a new form of ice at room
temperature, i.e. square ice. Additionally, atomically clean graphene was
obtained through a newly developed cleaning method using adsorbents, so-called "dry-cleaning"
Mid-level representations for object modelling
In this thesis we propose the use of mid-level representations, and in particular i) medial axes, ii) object parts, and iii) convolutional features, for modelling objects. The first part of the thesis deals with detecting medial axes in natural RGB images. We adopt a learning approach, utilizing colour, texture and spectral clustering features, to build a classifier that produces a dense probability map for symmetry. Multiple Instance Learning (MIL) allows us to treat scale and orientation as latent variables during training, while a variation based on random forests offers significant gains in terms of running time. In the second part of the thesis we focus on object part modeling using both hand-crafted and learned feature representations. We develop a coarse-to-fine, hierarchical approach that uses probabilistic bounds for part scores to decrease the computational cost of mixture models with a large number of HOG-based templates. These efficiently computed probabilistic bounds allow us to quickly discard large parts of the image and evaluate the exact convolution scores only at promising locations. Our approach achieves a 4x-5x speedup over the naive approach with minimal loss in performance. We also employ convolutional features to improve object detection. We use a popular CNN architecture to extract responses from an intermediate convolutional layer.
We integrate these responses in the classic DPM pipeline, replacing hand-crafted HOG features, and observe a significant boost in detection performance (~14.5% increase in mAP). In the last part of the thesis we experiment with fully convolutional neural networks for the segmentation of object parts. We re-purpose a state-of-the-art CNN to perform fine-grained semantic segmentation of object parts and use a fully-connected CRF as a post-processing step to obtain sharp boundaries. We also inject prior shape information into our model through a Restricted Boltzmann Machine, trained on ground-truth segmentations. Finally, we train a new fully-convolutional architecture from a random initialization to segment different parts of the human brain in magnetic resonance image data. Our methods achieve state-of-the-art results on both types of data
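The bound-and-prune idea from the second part can be illustrated with a toy template matcher: a cheap upper bound on each location's score lets most locations be discarded before the exact convolution is evaluated. This sketch uses a Cauchy-Schwarz bound in place of the thesis's probabilistic bounds, and is purely illustrative (in a real detector the patch norms come almost for free from an integral image):

```python
import numpy as np

def exact_score(patch, template):
    """Full correlation score (the expensive evaluation)."""
    return float(np.sum(patch * template))

def score_bound(patch, template):
    """Cauchy-Schwarz upper bound: <p, t> <= ||p|| * ||t||.
    Since it never underestimates the score, pruning with it is safe."""
    return float(np.linalg.norm(patch) * np.linalg.norm(template))

def detect(image, template, threshold):
    """Return locations scoring >= threshold, evaluating the exact
    score only where the cheap bound does not rule the location out."""
    h, w = template.shape
    hits, evaluated = [], 0
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            patch = image[i:i + h, j:j + w]
            if score_bound(patch, template) < threshold:
                continue                      # safely pruned
            evaluated += 1
            if exact_score(patch, template) >= threshold:
                hits.append((i, j))
    return hits, evaluated

image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0                         # a single bright 2x2 blob
hits, evaluated = detect(image, np.ones((2, 2)), threshold=4.0)
# Pruning is exact: the same hits as brute force, far fewer evaluations.
```

Because the bound only ever overestimates, the pruned detector returns exactly the brute-force hits; the speedup comes from how rarely the exact score must be computed.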