Feature-based hybrid inspection planning for complex mechanical parts
Globalization and the emergence of new powers in the manufacturing world are among the many challenges major manufacturing enterprises face. This has multiplied the alternatives available to satisfy customers' growing needs regarding products' aesthetic and functional requirements. The complexity of part designs and engineering specifications needed to satisfy such needs often requires better use of advanced and more accurate tools to achieve good quality. Inspection is a crucial manufacturing function that must be further improved to cope with such challenges. Intelligent planning for the inspection of parts with complex geometric shapes and free-form surfaces, using contact or non-contact devices, remains a major challenge. Research in segmentation and localization techniques should also enable inspection systems to exploit modern measurement technologies capable of collecting huge numbers of measured points.
Advanced digitization tools can be classified as contact or non-contact sensors. The purpose of this thesis is to develop a hybrid inspection planning system that benefits from the advantages of both techniques. Moreover, minimizing the deviation of the measured part from the original CAD model is not the only characteristic that should be considered when implementing the localization process to accept or reject the part; geometric tolerances must also be considered. A segmentation technique that deals directly with the individual points is a necessary step in the developed inspection system: its output is the actual measured points, not a tessellated model as commonly produced by current segmentation tools.
The contribution of this work is threefold. First, a knowledge-based system was developed for selecting the most suitable sensor using an inspection-specific feature taxonomy in the form of a 3D matrix, where each cell includes the corresponding knowledge rules and generates inspection tasks. A Traveling Salesperson Problem (TSP) formulation was applied to sequence these hybrid inspection tasks. Second, a novel region-based segmentation algorithm was developed that deals directly with the measured point cloud and generates sub-point clouds, each of which represents a feature to be inspected and includes the original measured points. Finally, a new tolerance-based localization algorithm was developed to verify the functional requirements; it was applied and tested using form tolerance specifications.
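The TSP sequencing step can be illustrated with a minimal sketch. The task names and probe coordinates below are hypothetical, and a greedy nearest-neighbor heuristic stands in for whatever TSP solver the thesis actually employs; it simply orders tasks so each move goes to the closest unvisited inspection location.

```python
import math

# Hypothetical inspection tasks mapped to probe positions (x, y, z in mm);
# these names and coordinates are illustrative only.
tasks = {
    "hole_A": (0.0, 0.0, 0.0),
    "slot_B": (4.0, 1.0, 0.0),
    "face_C": (1.0, 5.0, 0.0),
    "boss_D": (5.0, 4.0, 0.0),
}

def nearest_neighbor_sequence(tasks, start):
    """Greedy nearest-neighbor heuristic for TSP-style task sequencing:
    repeatedly visit the closest remaining inspection location."""
    remaining = dict(tasks)
    order = [start]
    current = remaining.pop(start)
    while remaining:
        # Choose the unvisited task nearest to the current probe position.
        name = min(remaining, key=lambda k: math.dist(current, remaining[k]))
        order.append(name)
        current = remaining.pop(name)
    return order

print(nearest_neighbor_sequence(tasks, "hole_A"))
# → ['hole_A', 'slot_B', 'boss_D', 'face_C']
```

Nearest-neighbor gives no optimality guarantee, but it is a common starting point before applying 2-opt or an exact TSP solver to the hybrid task sequence.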
This research enhances existing inspection planning systems for complex mechanical parts with a hybrid inspection planning model. The main benefit of the developed segmentation and tolerance-based localization algorithms is improved inspection decisions: good parts are no longer rejected because of misleading results from currently available localization techniques. These better, more accurate inspection decisions will lead to less scrap, which in turn will reduce product cost and improve the company's position in the market.
Digital Filters and Signal Processing
Digital filters, together with signal processing, are employed in new technologies and information systems and are implemented in many different areas and applications. They can be adapted to different cases with great flexibility and reliability, at little cost. This book presents advanced developments in digital filters and signal processing methods, covering a range of case studies. The chapters convey the essence of the subject, from the principal approaches to the most recent mathematical models employed worldwide.
Hyperspectral Data Acquisition and Its Application for Face Recognition
Current face recognition systems face serious challenges in uncontrolled conditions: unrestrained lighting, pose variations, accessories, etc. Hyperspectral imaging (HI) is typically employed to counter many of these challenges by incorporating the spectral information within different bands. Although numerous methods based on hyperspectral imaging have been developed for face recognition with promising results, three fundamental challenges remain: 1) low signal-to-noise ratios and low intensity values in the bands of the hyperspectral image, specifically near the blue bands; 2) the high dimensionality of hyperspectral data; and 3) inter-band misalignment (IBM) correlated with subject motion during data acquisition.
This dissertation concentrates mainly on addressing the aforementioned challenges in HI. First, to address low quality of the bands of the hyperspectral image, we utilize a custom light source that has more radiant power at shorter wavelengths and properly adjust camera exposure times corresponding to lower transmittance of the filter and lower radiant power of our light source.
Second, the high dimensionality of spectral data imposes limitations on numerical analysis. As such, there is an emerging demand for robust data compression techniques that discard only the less relevant information when managing real spectral data. To cope with these challenges, we describe a reduced-order data modeling technique based on local proper orthogonal decomposition, which computes low-dimensional models by projecting high-dimensional clusters onto subspaces spanned by local reduced-order bases.
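The core of proper orthogonal decomposition can be sketched in a few lines. The data below are synthetic (random spectra lying near a low-dimensional subspace), and the global SVD-based reduction shown here is a simplification of the local, cluster-wise version described in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for hyperspectral data: 200 samples of 50-dimensional
# spectra that lie near a 5-dimensional subspace, plus small noise.
basis_true = rng.standard_normal((50, 5))
coeffs = rng.standard_normal((5, 200))
data = basis_true @ coeffs + 0.01 * rng.standard_normal((50, 200))

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition: keep the leading left singular
    vectors that capture the requested fraction of the snapshot energy."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r]

basis = pod_basis(data)
reduced = basis.T @ data          # low-dimensional representation
reconstructed = basis @ reduced   # projection back to the full space
err = np.linalg.norm(data - reconstructed) / np.linalg.norm(data)
print(basis.shape[1], f"relative reconstruction error: {err:.3e}")
```

The local variant applies the same projection per cluster, so each reduced-order basis only needs to represent spectra that are already similar.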
Third, we investigate 11 leading alignment approaches to address IBM correlated with subject motion during data acquisition. To overcome the limitations of these approaches, we propose an accurate alignment approach (A3) that incorporates the strengths of point correspondence and a low-rank model. In addition, we develop two qualitative prediction models to assess the alignment quality of hyperspectral images and to identify the best alignment among the considered approaches. Finally, we show that the proposed alignment approach leads to promising improvement in the face recognition performance of a probabilistic linear discriminant analysis approach.
Nonlinear Approximations in Filter Design and Wave Propagation
This thesis has two parts. In both parts we use nonlinear approximations to obtain accurate solutions to problems where traditional numerical approaches rapidly become computationally infeasible.
The first part describes a systematic method for designing highly accurate and efficient infinite impulse response (IIR) and finite impulse response (FIR) filters given their specifications. In our approach, we first meet the specifications by constructing an IIR filter, without requiring the filter to be causal, and possibly with a large number of poles. We then construct, for any given accuracy, an optimal IIR version of that filter. Finally, also for any given accuracy, we convert the IIR filter to an efficient FIR filter cascade. In this FIR approximation, the non-causal part of the IIR filter only introduces an additional delay. Because our IIR construction does not have to enforce causality, the filters we design are more efficient than filters designed by existing methods.
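The final IIR-to-FIR step relies on the fact that a stable IIR filter's impulse response decays, so truncating it yields an FIR approximation with a controllable error. The sketch below uses a toy one-pole filter and plain truncation; it is not the optimal cascade construction of the thesis, only the naive idea behind it.

```python
import numpy as np

# Toy stable one-pole IIR filter: y[n] = x[n] + p * y[n-1],
# with impulse response h[n] = p**n and transfer function 1 / (1 - p z^-1).
p = 0.9

# FIR approximation: truncate the infinite impulse response at N taps.
N = 128
fir = p ** np.arange(N)

# The discarded tail bounds the frequency-response error:
#   |H_iir(w) - H_fir(w)| <= sum_{n >= N} p**n = p**N / (1 - p).
tail_bound = p ** N / (1 - p)

# Compare the two frequency responses on a grid.
w = np.linspace(0, np.pi, 513)
H_iir = 1.0 / (1.0 - p * np.exp(-1j * w))
H_fir = fir @ np.exp(-1j * np.outer(np.arange(N), w))
max_err = np.max(np.abs(H_iir - H_fir))
print(f"max error {max_err:.2e}, tail bound {tail_bound:.2e}")
```

The thesis's construction is far more economical than truncation: reducing the IIR pole count first, then building an FIR cascade, achieves a given accuracy with many fewer operations.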
The second part describes a fast algorithm to propagate, for any desired accuracy, a time-harmonic electromagnetic field between two planes separated by free space. The analytic formulation of this problem (circa 1897) requires evaluating the Rayleigh-Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is large, it can be accurately approximated by asymptotic methods. The computational difficulties arise in the intermediate region where, to obtain an accurate solution, it is necessary to apply the oscillatory Rayleigh-Sommerfeld kernel as is. In our approach, we accurately approximate the kernel by a short sum of Gaussians with complex exponents and then efficiently apply the result to input data using the unequally spaced fast Fourier transform. The resulting algorithm has the same computational complexity as methods based on the Fresnel approximation. We demonstrate that while the Fresnel approximation may provide adequate accuracy near the optical axis, the accuracy deteriorates significantly away from the optical axis. In contrast, our method maintains controlled accuracy throughout the entire computational domain.
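The small-distance regime mentioned above is commonly handled with the standard angular spectrum (Fourier-domain) method. The sketch below implements that textbook approach on a synthetic square aperture; it is not the Gaussian-sum kernel algorithm of the thesis, and the grid size, sample spacing, and wavelength are arbitrary illustrative values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled field between parallel planes via the standard
    angular spectrum (Fourier-domain) method, valid for modest distances z.
    Evanescent components (spatial frequencies beyond 1/wavelength) are
    suppressed rather than allowed to grow."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a square aperture illuminated by a unit-amplitude plane wave.
n, dx, wavelength = 256, 1e-6, 0.5e-6
field = np.zeros((n, n), dtype=complex)
field[96:160, 96:160] = 1.0
out = angular_spectrum_propagate(field, wavelength, dx, 50e-6)

# With no evanescent content on this grid the transfer function is
# unimodular, so the propagated field conserves energy.
rel_energy_err = abs(np.sum(np.abs(out)**2) - np.sum(np.abs(field)**2)) \
    / np.sum(np.abs(field)**2)
print(f"relative energy error: {rel_energy_err:.2e}")
```

At large plane separations the required Fourier-domain sampling of the oscillatory transfer function becomes prohibitive, which is exactly the regime the thesis's Gaussian-sum approximation targets.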