
    Multi-scale data fusion for surface metrology

    The major trends in manufacturing are miniaturization, convergence of traditional research fields, and the creation of interdisciplinary research areas. These trends have resulted in the development of multi-scale models and multi-scale surfaces to optimize performance. Multi-scale surfaces that exhibit specific properties at different scales for a specific purpose require multi-scale measurement and characterization. Researchers and instrument developers have built instruments that can perform measurements at multiple scales but lack the much-needed multi-scale characterization capability. The primary focus of this research was to explore possible multi-scale data fusion strategies and options for the surface metrology domain and to develop enabling software tools, in order to obtain effective multi-scale surface characterization that maximizes fidelity while minimizing measurement cost and time. This research effort explored fusion strategies for the surface metrology domain and narrowed the focus to Discrete Wavelet Frame (DWF) based multi-scale decomposition. An optimized multi-scale data fusion strategy, the ‘FWR method’, was developed and successfully demonstrated on both high-aspect-ratio surfaces and non-planar surfaces. It was demonstrated that datum features can be characterized effectively at a lower resolution using one system (Vision CMM), while the features of interest are characterized at a higher resolution using another, more capable system (Coherence Scanning Interferometer), thereby minimizing measurement time.
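
    The abstract names Discrete Wavelet Frame (DWF) based multi-scale decomposition as the fusion backbone. The sketch below illustrates the general idea with PyWavelets' stationary (undecimated) wavelet transform and a common fusion rule (blend approximations, keep the strongest detail coefficients); it is not the FWR method itself, and the function name, wavelet choice, and fusion rule are illustrative assumptions.

    # Illustrative wavelet-frame fusion of two co-registered surface height maps.
    # Not the thesis' FWR method: the fusion rule here is a generic default.
    import numpy as np
    import pywt

    def fuse_surfaces(z_lowres, z_highres, wavelet="db4", level=3):
        """Fuse two height maps of identical shape (rows/cols divisible by 2**level)."""
        c_lo = pywt.swt2(z_lowres, wavelet, level=level)
        c_hi = pywt.swt2(z_highres, wavelet, level=level)
        fused = []
        for (a_lo, d_lo), (a_hi, d_hi) in zip(c_lo, c_hi):
            a_f = 0.5 * (a_lo + a_hi)                      # blend the coarse form
            d_f = tuple(np.where(np.abs(dl) >= np.abs(dh), dl, dh)
                        for dl, dh in zip(d_lo, d_hi))     # keep the strongest detail
            fused.append((a_f, d_f))
        return pywt.iswt2(fused, wavelet)

    # Toy usage on a common 256 x 256 grid (stand-ins for Vision CMM and CSI data)
    rng = np.random.default_rng(0)
    z_fused = fuse_surfaces(rng.normal(size=(256, 256)), rng.normal(size=(256, 256)))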

    Characterisation of the Physical Chemical Processes Using the Fractal and Harmonic Analysis

    There are many different ways to characterize dispersed systems and the physico-chemical processes occurring in them. This work focuses on using the fractal properties of such systems to describe those processes. The fractal properties are calculated from image data of the systems under observation using wavelet analysis. Since Harmonic Fractal Analysis can be automated relatively easily, the work also focuses on algorithmising the analysis and removing all manual steps from the process. The automation was performed by incorporating these findings into HarFA, the software for Harmonic Fractal Analysis developed at the Faculty of Chemistry, Brno University of Technology.
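
    As a rough illustration of how a wavelet analysis can yield a fractal measure of an image, the sketch below (NumPy/PyWavelets) fits the scaling of detail-coefficient energy across dyadic levels and maps the slope to a fractal dimension assuming a fractional-Brownian-motion-like surface (D = 3 - H). HarFA's actual formulation is not reproduced here; the names and the scaling relation are assumptions.

    import numpy as np
    import pywt

    def wavelet_fractal_dimension(image, wavelet="haar", levels=5):
        """Crude fractal-dimension estimate from the wavelet energy scaling of an image."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        energies, scales = [], []
        for j, details in enumerate(coeffs[1:], start=1):   # coarsest detail level first
            energies.append(np.mean([np.mean(d ** 2) for d in details]))
            scales.append(levels - j + 1)                    # larger index = coarser scale
        slope, _ = np.polyfit(scales, np.log2(energies), 1)
        hurst = (slope - 2.0) / 2.0      # assumes detail energy ~ 2^(j(2H+2)) in 2D
        return 3.0 - hurst               # fractal dimension of an fBm-like surface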

    Geometric Inference with Microlens Arrays

    This dissertation explores an alternative to traditional fiducial markers, with which geometric information is inferred from the observed positions of 3D points seen in an image. We offer an alternative approach that enables geometric inference based on the relative orientation of markers in an image. We present markers fabricated from microlenses whose appearance changes depending on the marker's orientation relative to the camera. First, we show how to manufacture and calibrate chromo-coding lenticular arrays to create a known relationship between the observed hue and the orientation of the array. Second, we use two small chromo-coding lenticular arrays to estimate the pose of an object. Third, we use three large chromo-coding lenticular arrays to calibrate a camera with a single image. Finally, we create another type of fiducial marker from lenslet arrays that encode orientation with discrete black-and-white appearances. Collectively, these approaches offer new opportunities for pose estimation and camera calibration that are relevant for robotics, virtual reality, and augmented reality.
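
    The key calibration step is a mapping from observed hue to the angle between the viewing ray and the lenticular array. A minimal NumPy sketch of that lookup is shown below; the calibration table values and the assumption of a monotonic hue-angle curve are illustrative, not taken from the dissertation.

    import numpy as np

    # Hypothetical calibration: HSV hue in [0, 1) versus array tilt in degrees.
    calib_hue = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90])
    calib_angle = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])

    def hue_to_angle(observed_hue):
        """Interpolate the calibrated hue-angle curve (assumes monotonic hue)."""
        return np.interp(observed_hue, calib_hue, calib_angle)

    print(hue_to_angle(0.37))   # estimated tilt of the array relative to the camera ray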

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as taken by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one based on deep learning. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.
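
    To make the compressed-sensing idea concrete, the toy NumPy sketch below recovers a sparse vector from coded linear measurements with ISTA (soft-thresholded gradient steps). The actual method reconstructs full multispectral light fields; the sensing matrix, sparsity level, and step rule here are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 96, 8                        # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in for the coding/sensing operator
    y = A @ x_true

    def ista(y, A, lam=0.02, n_iter=500):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2                        # 1 / Lipschitz constant
        for _ in range(n_iter):
            x = x + step * A.T @ (y - A @ x)                          # gradient step on data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
        return x

    x_hat = ista(y, A)
    print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))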

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed and evaluated in detail. First, a full reconstruction of the spectral light field is developed, based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Due to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a ray tracer. This dataset, containing approximately 100k spectral light fields with corresponding disparity, is split into a training, validation, and test set. To further assess quality, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated -- random, regular, as well as end-to-end optimized coding masks generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth. Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, particularly when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
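
    The auxiliary-loss weighting can be pictured as comparing per-loss gradients on the shared parameters. The PyTorch sketch below gates an auxiliary loss by the cosine similarity of its gradient with the main-task gradient; it is only in the spirit of the normalized-gradient-similarity method described above, and the function names and the clamping rule are assumptions.

    import torch

    def grad_vector(loss, params):
        """Flatten the gradient of a scalar loss with respect to a list of parameters."""
        grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        return torch.cat([g.reshape(-1) for g in grads if g is not None])

    def combined_loss(main_loss, aux_loss, shared_params):
        g_main = grad_vector(main_loss, shared_params)
        g_aux = grad_vector(aux_loss, shared_params)
        cos = torch.nn.functional.cosine_similarity(g_main, g_aux, dim=0)
        weight = torch.clamp(cos, min=0.0).detach()    # down-weight conflicting auxiliary gradients
        return main_loss + weight * aux_loss

    # usage sketch (model.shared is hypothetical):
    # loss = combined_loss(disparity_loss, spectral_loss, list(model.shared.parameters()))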

    LABORATORY SIMULATION OF TURBULENT-LIKE FLOWS

    Most turbulence studies to date are based on statistical modeling; however, the spatio-temporal flow structure of turbulence is still largely unexplored. Turbulence has been established to have a multi-scale instantaneous streamline structure which influences the energy spectrum and other properties such as dissipation and mixing. In an attempt to further understand the fundamental nature of turbulence and its consequences for efficient mixing, a new class of flows, so-called “turbulent-like” flows, is introduced and the spatio-temporal structure of these flows is characterised. These flows are generated in the laboratory using a shallow layer of brine and are controlled by multi-scale electromagnetic forces resulting from the combination of an electric current and a magnetic field created by a fractal permanent magnet distribution. The flows are laminar, yet turbulent-like, in that they have a multi-scale streamline topology in the shape of “cat’s eyes” within “cat’s eyes” (or 8’s within 8’s), similar to the known schematic streamline structure of two-dimensional turbulence. Unsteadiness is introduced to the flows by means of a time-dependent electrical current. Particle Tracking Velocimetry (PTV) measurements are performed; the technique developed provides highly resolved Eulerian velocity fields in space and time. The analysis focuses on the impact of the forcing frequency, mean intensity and amplitude on various Eulerian and Lagrangian properties of the flows, e.g. the energy spectrum and fluid element dispersion statistics. Other statistics, such as the integral length and time scales, are also extracted to characterise the unsteady multi-scale flows. The research outcome is an analysis of laboratory-generated unsteady multi-scale flows, which provide a tool for the controlled study of complex flow properties related to turbulence and mixing, with potential applications as efficient mixers as well as in geophysical, environmental and industrial fields.
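
    A standard diagnostic mentioned above is the energy spectrum of the measured velocity fields. The NumPy sketch below computes an isotropic kinetic-energy spectrum from a 2D velocity field on a periodic grid by shell-averaging the FFT energy; the PTV data would first have to be interpolated onto such a grid, and the normalization is illustrative.

    import numpy as np

    def energy_spectrum(u, v, dx=1.0):
        """Isotropic kinetic-energy spectrum E(k) of a 2D periodic velocity field (u, v)."""
        n = u.shape[0]
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        e2d = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2) / n ** 4   # spectral energy density
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        kmag = np.hypot(kx, ky).ravel()
        edges = np.linspace(0.0, kmag.max(), n // 2 + 1)           # wavenumber shells
        idx = np.digitize(kmag, edges[1:-1])                       # shell index of each mode
        E = np.bincount(idx, weights=e2d.ravel(), minlength=n // 2)
        centres = 0.5 * (edges[:-1] + edges[1:])
        return centres, E[:n // 2]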

    Quantitative Optical Studies of Oxidative Stress in Rodent Models of Eye and Lung Injuries

    Optical imaging techniques have emerged as essential tools for reliable assessment of organ structure, biochemistry, and metabolic function. The recognition of metabolic markers for disease diagnosis has rekindled significant interest in the development of optical methods to measure the metabolism of the organ. The objective of my research was to employ optical imaging tools and to implement signal and image processing techniques capable of quantifying cellular metabolism for the diagnosis of diseases in human organs such as eyes and lungs. To accomplish this goal, three different tools, a cryoimager, a fluorescence microscope, and an optical coherence tomography system, were utilized to study the physiological metabolic markers and early structural changes due to injury in vitro, ex vivo, and at cryogenic temperatures. Cryogenic studies of eye injuries in animal models were performed using a fluorescence cryoimager to monitor two endogenous mitochondrial fluorophores, NADH (nicotinamide adenine dinucleotide) and FAD (flavin adenine dinucleotide). The mitochondrial redox ratio (NADH/FAD), which is correlated with oxidative stress level, is an optical biomarker. The spatial distribution of the mitochondrial redox ratio in injured eyes with different durations of the disease was delineated. This spatiotemporal information was helpful to investigate the heterogeneity of ocular oxidative stress during disease and its association with retinopathy. To study the metabolism of the eye tissue, the retinal layer was targeted, which required high-resolution imaging of the eye as well as developing a segmentation algorithm to quantitatively monitor and measure the metabolic redox state of the retina. To achieve a high signal-to-noise ratio in fluorescence image acquisition, the imaging was performed at cryogenic temperatures, which increased the quantum yield of the intrinsic fluorophores. Microscopy studies of cells were accomplished by using an inverted fluorescence microscope. Fixed slides of retina tissue as well as exogenous fluorophores in live lung cells were imaged using fluorescence and time-lapse microscopy. Image processing techniques were developed to quantify subtle changes in the morphological parameters of the retinal vasculature network for the early detection of the injury. This implemented image cytometry tool was capable of segmenting vascular cells, calculating vasculature features including area, caliber, branch points, fractal dimension, and acellular capillaries, and classifying healthy and injured retinas. Using time-lapse microscopy, the dynamics of cellular ROS (Reactive Oxygen Species) concentration were quantified and modeled in ROS-mediated lung injuries. A new methodology and an experimental protocol were designed to quantify changes of oxidative stress in different stress conditions and to localize the site of ROS in an uncoupled state of pulmonary artery endothelial cells (PAECs). Ex vivo studies of the lung were conducted using a spectral-domain optical coherence tomography (SD-OCT) system, and 3D scanned images of the lung were acquired. An image segmentation algorithm was developed to study the dynamics of structural changes in the lung alveoli in real time. Quantifying the structural dynamics provided information to diagnose pulmonary diseases and to evaluate the severity of the lung injury. The implemented software was able to quantify and present the changes in alveoli compliance in lung injury models, including edema.
In conclusion, optical instrumentation, combined with signal and image processing techniques, provides quantitative physiological and structural information reflecting disease progression due to oxidative stress. These tools provide a unique capability to identify early points of intervention, which play a vital role in the early detection of eye and lung injuries. The future goal of this research is to translate optical imaging to clinical settings, and to transfer the instruments developed for animal models to the bedside for patient diagnosis.
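
    The central optical biomarker above is the per-pixel mitochondrial redox ratio computed from two co-registered fluorescence channels. A minimal NumPy sketch is given below; the background threshold and the NADH/FAD convention are illustrative assumptions rather than the study's processing pipeline.

    import numpy as np

    def redox_ratio_map(nadh, fad, bg_threshold=0.05, eps=1e-6):
        """Per-pixel NADH/FAD ratio with a crude intensity mask for the background."""
        nadh, fad = nadh.astype(float), fad.astype(float)
        mask = (nadh > bg_threshold * nadh.max()) & (fad > bg_threshold * fad.max())
        ratio = np.where(mask, nadh / (fad + eps), np.nan)
        return ratio, np.nanmean(ratio)        # redox map and its tissue-averaged value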

    Digital Image-Based Frameworks for Monitoring and Controlling of Particulate Systems

    Particulate processes are widely involved in various industries, and most products in the chemical industry today are manufactured as particulates. Previous research and practice show that final product quality can be influenced by particle properties such as size and shape, which are related to operating conditions. Online characterization of these particles is an important step for maintaining the desired product quality in particulate processes, and an image-based characterization method for monitoring and controlling such processes is therefore very promising and attractive. The development of a digital image-based framework, in the context of this research, can be envisioned in two parts. One is performing image analysis and designing advanced algorithms for segmentation and texture analysis. The other is formulating and implementing modern predictive tools to establish correlations between texture features and particle characteristics. According to the extent of touching and overlapping between particles in images, two image analysis methods were developed and tested. For slight touching problems, image segmentation algorithms were developed by introducing Wavelet Transform de-noising and Fuzzy C-means Clustering to detect the touching regions, and by exploiting the intensity and geometry characteristics of the touching areas. Since individual particles can be identified through image segmentation, particle number, particle equivalent diameter, and size distribution were used as the features. For severe touching and overlapping problems, texture analysis was carried out through the estimation of the wavelet energy signature and the fractal dimension, based on wavelet decomposition of the objects. Predictive models for monitoring and control of particulate processes were formulated and implemented. Building on the feature extraction properties of the wavelet decomposition, a projection technique such as principal component analysis (PCA) was used to detect off-specification conditions in which the particle mean size deviates from the target value. Furthermore, linear and nonlinear predictive models based on partial least squares (PLS) and artificial neural networks (ANN) were formulated, implemented and tested on an experimental facility to predict particle characteristics (mean size and standard deviation) from the image texture analysis.
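
    For the severe-overlap case above, the texture feature is a wavelet energy signature that can then be projected with PCA. The sketch below (NumPy, PyWavelets, scikit-learn) computes a normalized energy signature per image and projects a toy batch onto two principal components; the wavelet, level, and random images are placeholders.

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_energy_signature(image, wavelet="db2", level=3):
        """Relative wavelet energy per level/orientation, plus the approximation energy."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        energies = [np.sum(c ** 2) for details in coeffs[1:] for c in details]
        energies.append(np.sum(coeffs[0] ** 2))
        e = np.asarray(energies)
        return e / e.sum()

    # Toy usage: signatures from a batch of particle images -> 2 principal component scores
    rng = np.random.default_rng(2)
    images = rng.random((20, 128, 128))
    X = np.stack([wavelet_energy_signature(im) for im in images])
    scores = PCA(n_components=2).fit_transform(X)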