135 research outputs found

    Natural Parameterization

    The objective of this project has been to develop an approach for imitating physical objects with an underlying stochastic variation. The key assumption is that a set of “natural parameters” can be extracted by a new subdivision algorithm so that they reflect what is called the object’s “geometric DNA”. A case study on one hundred wheat grain cross-sections (Triticum aestivum) showed that it was possible to extract thirty-six such parameters and to reuse them for Monte Carlo simulation of “new” stochastic phantoms which possess the same stochastic behavior as the “original” cross-sections
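The extract-then-resample idea can be sketched with a multivariate Gaussian stand-in. Everything here is an illustrative assumption: the `observed` matrix simulates the 100 × 36 parameter table, and a Gaussian fit replaces whatever statistical model the project actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the extracted parameters: one row of 36
# "natural parameters" for each of the 100 measured cross-sections.
observed = rng.normal(loc=1.0, scale=0.1, size=(100, 36))

# Fit a multivariate Gaussian to the observed parameter vectors ...
mean = observed.mean(axis=0)
cov = np.cov(observed, rowvar=False)

# ... and Monte Carlo-sample "new" parameter sets with the same first- and
# second-order statistics; these would be fed back through the shape model
# to generate the stochastic phantoms.
phantom_params = rng.multivariate_normal(mean, cov, size=10)
print(phantom_params.shape)  # (10, 36)
```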

    Snapshot hyperspectral imaging: near-infrared image replicating imaging spectrometer and achromatisation of Wollaston prisms

    Conventional hyperspectral imaging (HSI) techniques are time-sequential and rely on temporal scanning to capture hyperspectral images. This temporal constraint can limit the application of HSI to static scenes and platforms, where transient and dynamic events are not expected during data capture. The Near-Infrared Image Replicating Imaging Spectrometer (N-IRIS) sensor described in this thesis enables snapshot HSI in the short-wave infrared (SWIR), without the requirement for scanning, and operates without rejection of polarised light. It operates in eight wavebands from 1.1μm to 1.7μm with a 2.0° diagonal field-of-view. N-IRIS produces spectral images directly, without the need for prior tomographic or image reconstruction. Additional benefits include compactness, robustness, static operation, lower processing overheads, higher signal-to-noise ratio and higher optical throughput with respect to other HSI snapshot sensors generally. This thesis covers the IRIS design process from theoretical concepts to quantitative modelling, culminating in the N-IRIS prototype designed for SWIR imaging. This effort formed the logical step in advancing from peer efforts, which focussed upon the visible wavelengths. After acceptance testing to verify optical parameters, empirical laboratory trials were carried out. This testing focussed on discriminating between common materials within a controlled environment as proof-of-concept. Significance tests were used to provide an initial test of N-IRIS capability in distinguishing materials relative to a conventional SWIR broadband sensor. Motivated by the design and assembly of a cost-effective visible IRIS, an innovative solution was developed for the problem of chromatic variation in the splitting angle (CVSA) of Wollaston prisms. CVSA introduces spectral blurring of images.
Analytical theory is presented and is illustrated with an example N-IRIS application where a sixfold reduction in dispersion is achieved for wavelengths in the region 400nm to 1.7μm, although the principle is applicable from ultraviolet to thermal-IR wavelengths. Experimental proof of concept is demonstrated and the spectral smearing of an achromatised N-IRIS is shown to be reduced by an order of magnitude. These achromatised prisms can provide benefits to areas beyond hyperspectral imaging, such as microscopy, laser pulse control and spectrometry

    Direct occlusion handling for high level image processing algorithms

    Many high-level computer vision algorithms suffer in the presence of occlusions caused by multiple objects overlapping in a view. Occlusions remove the direct correspondence between visible areas of objects and the objects themselves by introducing ambiguity in the interpretation of the shape of the occluded object. Ignoring this ambiguity allows the perceived geometry of overlapping objects to be deformed or even fractured. Supplementing the raw image data with a vectorized structural representation which predicts object completions could stabilize high-level algorithms which currently disregard occlusions. Studies in the neuroscience community indicate that the feature points located at the intersection of junctions may be used by the human visual system to produce these completions. Geiger, Pao, and Rubin have successfully used these features in a purely rasterized setting to complete objects in a fashion similar to what is demonstrated by human perception. This work proposes using these features in a vectorized approach to solving the mid-level computer vision problem of object stitching. A system has been implemented which is able to extract L- and T-junctions directly from the edges of an image using scale-space and robust statistical techniques. The system is sensitive enough to be able to isolate the corners on polygons with 24 sides or more, provided sufficient image resolution is available. Areas of promising development have been identified and several directions for further research are proposed
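The junction-extraction step can be illustrated with a much simpler relative of the thesis's scale-space approach: a Harris-style corner response, which also peaks at L-junction-like points. The smoothing window and synthetic image are assumptions of this sketch, not the implemented system.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: strongly positive where gradients vary in two
    directions at once, i.e. at corner- and junction-like points."""
    iy, ix = np.gradient(img.astype(float))
    def box3(a):  # crude 3x3 box smoothing of the structure-tensor entries
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

# Synthetic image: a bright square, whose four corners are L-junctions.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
y, x = np.unravel_index(np.argmax(r), r.shape)  # (y, x) lies near a corner
print(r[8, 8] > 0, r[8, 16] < 0)  # True True (corner positive, edge negative)
```

Responses are positive only where both gradient directions are present, which is why straight edges (negative) and flat regions (zero) are suppressed.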

    Magnitude Sensitive Competitive Neural Networks

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are competitive learning algorithms that include a magnitude term as a modulating factor in the distance used for the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the regions of interest, defined by the magnitude, are represented in high detail. These networks have been compared with other vector quantization algorithms on several examples of interpolation, colour reduction, surface modelling, classification, and a number of simple demonstration problems. In addition, a new image compression algorithm, MSIC (Magnitude Sensitive Image Compression), is introduced; it builds on the aforementioned algorithms and achieves image compression that varies according to a user-defined magnitude. The results show that the new MSCNNs are more versatile than other competitive learning algorithms and clearly outperform them in vector quantization when the data are weighted by a magnitude that indicates the “interest” of each sample
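The general idea of magnitude-modulated competition can be sketched as follows. This is an illustrative toy rule, not necessarily the exact MSCNN update: each unit keeps a running magnitude estimate, and the product of distance and unit magnitude decides the winner, so more units migrate into high-magnitude regions.

```python
import numpy as np

rng = np.random.default_rng(1)

def msc_train(data, magnitude, n_units=8, epochs=30, lr=0.1):
    """Toy magnitude-sensitive competitive learning (illustrative only)."""
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    unit_mag = np.full(n_units, magnitude.mean())
    for _ in range(epochs):
        for x, m in zip(data, magnitude):
            d = np.linalg.norm(units - x, axis=1)
            winner = np.argmin(d * unit_mag)           # magnitude-modulated distance
            units[winner] += lr * (x - units[winner])   # move winner towards sample
            unit_mag[winner] += lr * (m - unit_mag[winner])
    return units, unit_mag

# 1-D demo: data uniform on [0, 1], but the user-defined magnitude
# ("interest") grows towards 1.0, so detail should concentrate there.
data = rng.uniform(0.0, 1.0, size=(400, 1))
magnitude = data[:, 0] ** 4
units, unit_mag = msc_train(data, magnitude)
print(np.sort(units[:, 0]))
```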

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques which can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information presenting the recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare, to anyone interested in this subject

    Automated Facial Anthropometry Over 3D Face Surface Textured Meshes

    The automation of human face measurement involves major technical and technological challenges. The use of 3D scanning technology is widely accepted in the scientific community and offers the possibility of developing non-invasive measurement techniques. However, selecting the points that form the basis of the measurements is a task that still requires human intervention. This work introduces digital image processing methods for the automatic localization of facial features. The first goal was to examine different ways of representing 3D shape and to evaluate whether these representations could serve as features of facial attributes, in order to locate those attributes automatically. Building on this, a non-rigid registration procedure was developed to estimate dense point-to-point correspondence between two surfaces; the method is able to register 3D face models in the presence of facial expressions. Finally, a method that uses both the shape and the appearance of the surface was designed for the automatic localization of a set of facial features that form the basis for computing anthropometric ratios, which are widely used in fields such as ergonomics, forensics and surgical planning, among others
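A minimal building block of dense correspondence estimation is brute-force closest-point matching between two vertex sets. Real non-rigid registration iterates this while deforming the source; the sketch below shows only the correspondence step, on hypothetical toy data.

```python
import numpy as np

def closest_point_correspondence(source, target):
    """For each source vertex, return the index of its nearest target vertex
    (brute force over all pairs -- fine for small meshes)."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy example: the "target mesh" is the source shifted by a tiny amount,
# so every vertex should correspond to its own counterpart.
source = np.arange(30, dtype=float).reshape(10, 3)
target = source + 0.01
idx = closest_point_correspondence(source, target)
print(idx)  # [0 1 2 3 4 5 6 7 8 9]
```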

    3D face recognition using photometric stereo

    Automatic face recognition has been an active research area for the last four decades. This thesis explores innovative bio-inspired concepts aimed at improved face recognition using surface normals. New directions in salient data representation are explored using data captured via a photometric stereo method from the University of the West of England’s “Photoface” device. Accuracy assessments demonstrate the advantage of the capture format and the synergy offered by near infrared light sources in achieving more accurate results than under conventional visible light. Two 3D face databases have been created as part of the thesis – the publicly available Photoface database which contains 3187 images of 453 subjects and the 3DE-VISIR dataset which contains 363 images of 115 people with different expressions captured simultaneously under near infrared and visible light. The Photoface database is believed to be the first to capture naturalistic 3D face models. Subsets of these databases are then used to show the results of experiments inspired by the human visual system. Experimental results show that optimal recognition rates are achieved using surprisingly low resolution of only 10x10 pixels on surface normal data, which corresponds to the spatial frequency range of optimal human performance. Motivated by the observed increase in recognition speed and accuracy that occurs in humans when faces are caricatured, novel interpretations of caricaturing using outlying data and pixel locations with high variance show that performance remains disproportionately high when up to 90% of the data has been discarded. These direct methods of dimensionality reduction have useful implications for the storage and processing requirements for commercial face recognition systems.
The novel variance approach is extended to recognise positive expressions with 90% accuracy, which has useful implications for human-computer interaction as well as for ensuring that a subject has the correct expression prior to recognition. Furthermore, the subject recognition rate is improved by removing those pixels which encode expression. Finally, preliminary work on feature detection on surface normals by extending Haar-like features is presented, which is also shown to be useful for correcting the pose of the head as part of a fully operational device. The system operates with an accuracy of 98.65% at a false acceptance rate of only 0.01 on front-facing heads with neutral expressions. The work has shown how new avenues of enquiry inspired by our observation of the human visual system can offer useful advantages towards achieving more robust autonomous computer-based facial recognition
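The low-resolution matching result can be sketched as follows: pool a surface-normal map down to 10x10 cells and identify by nearest neighbour over a gallery. The pooling scheme, distance measure and random "normal maps" are assumptions of this sketch, not the thesis pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def pool_to_10x10(normals):
    """Average-pool an (H, W, 3) surface-normal map down to 10x10 cells and
    renormalise, keeping only the low spatial frequencies."""
    h, w, _ = normals.shape
    crop = normals[: h - h % 10, : w - w % 10]
    pooled = crop.reshape(10, h // 10, 10, w // 10, 3).mean(axis=(1, 3))
    return pooled / np.linalg.norm(pooled, axis=2, keepdims=True)

def identify(probe, gallery):
    """Nearest-neighbour identification on the pooled normal maps."""
    p = pool_to_10x10(probe)
    scores = [np.linalg.norm(p - pool_to_10x10(g)) for g in gallery]
    return int(np.argmin(scores))

def random_normals(h=100, w=100):
    n = rng.normal(size=(h, w, 3))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Toy gallery of three random "normal maps"; the probe is a noisy copy of
# gallery entry 1 and should still be matched at 10x10 resolution.
gallery = [random_normals() for _ in range(3)]
probe = gallery[1] + 0.05 * rng.normal(size=(100, 100, 3))
match = identify(probe, gallery)
print(match)  # 1
```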

    Multi-resolution Active Models for Image Segmentation

    Image segmentation refers to the process of subdividing an image into a set of non-overlapping regions. It is a critical and essential step for almost all higher-level image processing and pattern recognition approaches, where a good segmentation relieves higher-level applications from considering irrelevant and noisy data in the image. Image segmentation is also considered the most challenging image processing step, for several reasons including the spatial discontinuity of the region of interest and the absence of universally accepted criteria for image segmentation. Among the huge number of segmentation approaches, active contour models, or simply snakes, receive great attention in the literature: the contour/boundary of the region of interest is defined as the set of pixels at which the active contour reaches its equilibrium state. In general, two forces control the movement of the snake inside the image: an internal force that resists stretching and bending, and an external force that pulls the snake towards the desired object boundaries. One main limitation of active contour models is their sensitivity to image noise. Specifically, noise sensitivity causes the active contour to fail to converge properly, getting caught on spurious image features and preventing the iterative solver from taking large steps towards the final contour. Initialization forms another limitation: especially in noisy images, the active contour needs to be initialized relatively close to the object of interest, otherwise it will be pulled towards other spurious image features. This dissertation, aiming to improve active-model-based segmentation, introduces two models for building up the external force of the active contour.
The first model builds a scale-weighted gradient map from all resolutions of the undecimated wavelet transform, with preference given to coarse gradients over fine gradients. The undecimated wavelet transform, owing to its near shift-invariance and absence of down-sampling, produces well-localized gradient maps at all resolutions of the transform. Hence, the proposed weighted gradient map is better able to drive the snake towards its final equilibrium state. Unlike other multiscale active contour algorithms that define a snake at each level of the hierarchy, our model defines a single snake whose external force field is built simultaneously from the gradient maps at all scales. The second model proposes the incorporation of the directional information revealed by the dual tree complex wavelet transform (DT CWT) into the external force field of the active contour. At each resolution of the transform, a steerable set of convolution kernels is created and used for external force generation. In the proposed model, the size and orientation of the kernels depend on the scale of the DT CWT and the local orientation statistics of each pixel. Experimental results using natural, synthetic and Optical Coherence Tomography (OCT) images demonstrate the superiority of the proposed models over the classical and state-of-the-art models
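The scale-weighted gradient map can be sketched with plain Gaussian smoothing standing in for the undecimated wavelet transform; the scales and the linear coarse-favouring weights below are arbitrary assumptions, not the thesis's actual weighting.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using only numpy."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)

def weighted_gradient_map(img, sigmas=(1.0, 2.0, 4.0)):
    """Sum gradient magnitudes over several smoothing scales, weighting the
    coarse scales more heavily, and normalise the result to [0, 1]."""
    total = np.zeros_like(img)
    for i, s in enumerate(sigmas):
        gy, gx = np.gradient(gaussian_blur(img, s))
        mag = np.hypot(gx, gy)
        total += (i + 1) * mag / mag.max()   # coarser scale -> larger weight
    return total / total.max()

# Demo: a bright square; the weighted map peaks on the square's boundary,
# which is where it would pull a snake's external force.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
emap = weighted_gradient_map(img)
print(emap.shape, round(float(emap.max()), 3))  # (40, 40) 1.0
```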

    Roadmap for optofluidics

    Optofluidics, nominally the research area where optics and fluidics merge, is a relatively new research field and it is only in the last decade that there has been a large increase in the number of optofluidic applications, as well as in the number of research groups devoted to the topic. Nowadays optofluidics applications include, without being limited to, lab-on-a-chip devices, fluid-based and controlled lenses, optical sensors for fluids and for suspended particles, biosensors, imaging tools, etc. The long list of potential optofluidics applications, which have been recently demonstrated, suggests that optofluidic technologies will become more and more common in everyday life in the future, causing a significant impact on many aspects of our society. A characteristic of this research field, deriving from both its interdisciplinary origin and applications, is that in order to develop suitable solutions a combination of deep knowledge in different fields, ranging from materials science to photonics, from microfluidics to molecular biology and biophysics, is often required. As a direct consequence, also being able to understand the long-term evolution of optofluidics research is not easy. In this article, we report several expert contributions on different topics so as to provide guidance for young scientists. At the same time, we hope that this document will also prove useful for funding institutions and stakeholders to better understand the perspectives and opportunities offered by this research field

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one, two or three dimensional; the processing is done in real-time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and comprehends several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. Authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition